Software development projects range from the very small to the very large and encompass a wide range of complexity, starting from software developed by a small team in their spare time and ending with projects lasting several years and involving the work of many people.
Similarly, the concept of what it means for a software development project to succeed also varies according to the context. For the small team developing an open source solution, it could be the intellectual challenge of solving a complex problem or the satisfaction of contributing to a community. Time is not critical, nor are costs: having fun in the process probably is.
For people developing safety-critical systems, the challenge is different. They need to ensure that the system will perform as expected in a wide array of operational conditions, including those in which there are malfunctions. Quality and a controlled process are paramount in this context.
Finally, for people developing a web application or a desktop system, the most important aspect could be the price that can be set for the product or making sure that the product is released before the competition. In this context, time and costs might be the main drivers.
Thus, the activities that are required or beneficial to develop successful software vary from project to project. Some projects might allow a more informal approach, while others are better served by a very structured and controlled process. Going back to the millefoglie example, the goal of this chapter is to present the “pastry,” that is, the activities that are needed to develop software. These are the technical building blocks for constructing software, that is, what we do in the “execute” phase of a software development project. These building blocks will be selected, composed, and organized in different ways, according to the project size and formality, the process adopted, and other management choices, as we will see in Chapter 4.
The first step of any nontrivial software development project is to form an idea about the system that has to be developed.
Software requirements definition includes the methods to identify and describe the features of the system to be built. The main output of this activity is one or more artifacts describing the (software) requirements of a system, namely, the functions it has to implement and the other properties it has to have.
Software requirements are strongly related to the scope document, which defines the goals of a project and which we will see in Chapter 3.
There are two main output formats for these artifacts: textual and diagrammatic.
When the textual format is used, the requirements are written in English, in some cases using a restricted subset of the language or predefined linguistic patterns. For instance, special words, such as “shall,” might be required to indicate an essential requirement—see, for example, Bradner (1997).
Concerning the structure, the requirements are often presented as lists of items, one per requirement. Another very common representation writes requirements in the form of user stories, using the following pattern:
As a [user] I want to do [this] because of [that].
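For instance, a hypothetical user story for an online store could read: As a [registered customer] I want to do [track the status of my order] because of [the need to plan for the delivery].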
The advantage of this approach is that each requirement clearly identifies the user, the function that has to be performed, and the motivation for the function to be implemented, something that helps identify the priority or importance of a requirement.
The diagrammatic notation describes requirements with a mix of diagrams and textual descriptions. Diagrams depict the interaction between the user and the system, and the textual descriptions explain the interaction using a sequence of steps. The most common graphical notation is that of the use case diagrams of UML, and the corresponding textual descriptions are called use cases (Booch et al., 1999; Fowler and Scott, 2000).
The requirements engineering discipline includes the activities necessary to define and maintain requirements over time. Simplifying a bit, requirements engineering entails a cyclical refinement process in which the following steps are repeated at increasing levels of detail until a satisfactory level of know-how about a system is achieved.
In more detail, the process is composed of four steps:
Requirements elicitation is the activity during which the list of features of a system is elicited from the customer. This activity can be performed with interviews, workshops, or the analysis of existing documents.
Requirements structuring is the phase during which the requirements are annotated to make their management and maintenance simpler.
During the process, requirements are also given unique identifiers and prioritized, so that they can be referenced, traced, and managed more easily.
Requirements evolve over time, and a sound approach to requirements management also necessitates defining a proper strategy to control the evolution of requirements. We will see some of the issues in Section 4.1. Here, it is sufficient to mention that requirements are often annotated with attributes such as a priority, a status, and traceability information linking them to their sources and to other artifacts (see, e.g., Gotel and Finkelstein, 1994).
Figure 2.1 shows an example of a template of an annotated requirement.
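As a complement to the figure, the following is a minimal sketch of how such an annotated requirement could be represented as a data structure; the attribute set, names, and values are hypothetical and serve only to illustrate the idea of annotating requirements for management purposes.

```python
# A minimal sketch of an annotated requirement; all attributes and values are
# invented for the example.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    identifier: str                     # unique ID used for referencing and traceability
    description: str                    # the requirement itself, in natural language
    priority: str = "medium"            # e.g., high, medium, low
    status: str = "proposed"            # e.g., proposed, approved, implemented, verified
    traces_to: list = field(default_factory=list)  # links to sources and other artifacts

req = Requirement(
    identifier="REQ-017",
    description="The system shall allow a registered user to reset the password via email.",
    priority="high",
    traces_to=["interview-2013-05-02", "UC-03"],
)
print(req.identifier, req.status)
```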
User experience design has the goal of providing a coherent and satisfying experience across the different artifacts that constitute a software system, including its design, interface, interaction, and manuals. It is defined in International Organization for Standardization (2010) as the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.
The typical user experience design activities include user research, interaction and interface design, prototyping, and usability testing.
Requirements validation is the phase during which the requirements are analyzed to find errors, ambiguities, inconsistencies, and omissions.
Different techniques can be used to validate requirements. We mention inspections and formal analyses. Document inspections are based on the work of a team that analyzes the content of documents and highlights any issue. The technique relies on the ability and experience of the team. Formal analyses use mathematical notations (such as first-order logic) to represent requirements and automated tools (such as theorem provers and model checkers) to prove properties about the requirements. Several notations and approaches are available; see, for instance, Clarke et al. (2000), Bozzano and Villafiorita (2010), and Spivey (1989) for more details.
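As an illustration of the formal approach, the following is a minimal sketch that encodes two hypothetical requirements in propositional logic and uses an SMT solver (the z3-solver Python package, an assumption of this sketch) to check whether they are consistent in a given scenario; the requirement wording and variable names are invented.

```python
# A minimal sketch of formal requirements analysis with an SMT solver.
# Hypothetical requirements:
#   R1: in an emergency, the doors shall be unlocked.
#   R2: in night mode, the doors shall remain locked.
from z3 import Bool, Solver, Implies, Not, sat

emergency = Bool("emergency")
night_mode = Bool("night_mode")
doors_unlocked = Bool("doors_unlocked")

s = Solver()
s.add(Implies(emergency, doors_unlocked))        # R1
s.add(Implies(night_mode, Not(doors_unlocked)))  # R2
s.add(emergency, night_mode)                     # scenario: an emergency during night mode

print("consistent" if s.check() == sat else "inconsistent in this scenario")
```

Running the check reports an inconsistency, signaling that the two requirements cannot both be satisfied when an emergency occurs in night mode.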
Notice that the goals of this phase overlap with those of quality management. We will see more about verification and validation techniques in Section 4.3.
In the 1990s, the university where I teach—a complex organization in which different offices have considerable organizational autonomy—kept personnel records in different databases: one for contracts, another for teaching assignments, another for granting entrance to laboratories, to name a few. The databases were not connected; any change had to be propagated manually to all databases, causing inconsistencies, omissions, and a lot of extra work to try and keep data in sync.
Enterprise resource planning (ERP) systems automate and simplify the processes of an organization, integrating the data and the procedures of different business units. These systems are usually composed of standardized components, which implement the main procedures of an organization in a particular business sector (e.g., government, logistics, services). Their introduction in an organization typically requires acting not only on the system, by personalizing data, procedures, and functions, but also on the organization, by changing the existing procedures to take full advantage of the system being introduced.
In this kind of project, understanding how work is carried out in an organization is often more relevant than eliciting the requirements of the system to be built, since an important part of the project work will focus on mapping the current procedures and changing them to accommodate those supported by the ERP.
The activity of understanding how an organization is structured and works is called business process modeling or, in short, business modeling. The activities carried out to modify the current procedures go under the name of business process re-engineering.
Business modeling and business re-engineering are usually organized in two main steps. An initial “as is” analysis describes the organization before the introduction of a new system. The “as is” analysis helps one understand the current infrastructure and needs. A complete analysis will include a description of the organizational structure, the business processes, the IT infrastructure, and the business entities.
Following the “as is” analysis, a “to be” phase defines how the organization will change with the introduction of the new system. The “to be” analysis produces the same set of information required by the “as is” analysis, but it describes the processes, the systems, and the business data that will be introduced to make operations more efficient.
Let us see in more detail the information produced with the “as is” and the “to be” analyses.
Mapping the organizational structure has the goal of understanding how an organization is structured.
The information to collect includes the list of the different business units and the lines of responsibility. More detailed analyses also include the roles or the staff employed by each business unit and the functions assigned to each role or person.
The output is a text document or an organizational chart describing the units and their functions. It is used to identify the changes that will have to be implemented in the organization to support the new processes.
Modeling the business processes has the goal of documenting how an organization carries out its procedures.
These are typically represented with flow diagrams sketched, for instance, using the Business Process Modeling Notation—BPMN (OMG, 2011). Business processes highlight, for each process, which steps need to be performed, by whom, and what artifacts are produced and consumed. The specification should model both nominal and exceptional situations. For instance, if the target of the analysis is a paper-based procedure to authorize a trip, a good process description will document what happens when everything flows as expected and how the organization recovers if some error occurs—for example, a paper form is lost in the middle of a procedure.
A difficult aspect of this analysis is capturing not only the formal procedures but also the current practices, namely, how people actually carry out the procedures. The field of ethnography in software engineering focuses on methods to simplify this activity. See, for instance, Rönkkö (2010) for an introduction to the matter.
The output is a document containing the processes, possibly organized by area or by business unit. It is the basis to specify the new business processes or the requirements of the systems that will have to implement them.
Mapping the IT infrastructure has the goal of understanding what IT systems are currently used in an organization, with what purpose, which data they store, and what lines of communications exist, if any.
Various notations can be used; the most formal ones are based on UML and could include component and deployment diagrams. Textual descriptions often complement the diagrams.
The output is a document. It is the basis for planning data migration and data integration activities. The former occurs when an existing system will be dismissed and the data it manages have to be migrated to the new system. The latter occurs when the existing system will remain in use and will have to communicate with the new system being introduced.
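As a small illustration of data migration, the sketch below moves records from a hypothetical legacy table to a new schema using SQLite; table names, columns, and data are all invented for the example.

```python
# A minimal extract-transform-load sketch for a data migration step.
import sqlite3

old_db = sqlite3.connect(":memory:")  # stands in for the system being dismissed
new_db = sqlite3.connect(":memory:")  # stands in for the system being introduced

old_db.execute("CREATE TABLE staff (name TEXT, surname TEXT, office TEXT)")
old_db.execute("INSERT INTO staff VALUES ('Ada', 'Lovelace', 'Research')")
new_db.execute("CREATE TABLE personnel (full_name TEXT, unit TEXT)")

# Extract from the old schema, transform, and load into the new schema.
for name, surname, office in old_db.execute("SELECT name, surname, office FROM staff"):
    new_db.execute("INSERT INTO personnel VALUES (?, ?)", (f"{name} {surname}", office))
new_db.commit()

print(new_db.execute("SELECT * FROM personnel").fetchall())
```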
Mapping the business entities has the goal of documenting which data are processed by an organization, by whom, and with what purpose.
During this activity, analysts typically produce data models and CRUD matrices. The former list the data processed by the business processes and are presented with class diagrams or textual descriptions.
The latter define the access rights to the data. They are presented as matrices whose rows list the data and whose columns list the business units. Each cell contains any combination of the CRUD letters to indicate whether the unit can create (“C”), read (“R”), update (“U”), or delete (“D”) the corresponding data.
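The following is a minimal sketch of a CRUD matrix expressed as a data structure, with invented business entities and business units; it only illustrates how the matrix records which operations each unit may perform.

```python
# Rows are business entities, columns are business units; each cell lists the
# allowed operations among create (C), read (R), update (U), and delete (D).
crud_matrix = {
    "Contract":            {"Human Resources": "CRUD", "Accounting": "R", "Lab Office": ""},
    "Teaching assignment": {"Human Resources": "R",    "Accounting": "",  "Lab Office": ""},
    "Lab access badge":    {"Human Resources": "R",    "Accounting": "",  "Lab Office": "CRUD"},
}

def allowed(entity: str, unit: str, operation: str) -> bool:
    """Return True if the unit may perform the operation (one of C, R, U, D) on the entity."""
    return operation in crud_matrix[entity][unit]

print(allowed("Contract", "Accounting", "R"))  # True
print(allowed("Contract", "Accounting", "U"))  # False
```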
The goals of design (also system design or architectural design in the rest of the book) and implementation are, respectively, to draw the blueprint of the system to be implemented and actually implement it.
System design defines the structure of the software to build, or system architecture. The output of this activity is one or more documents that describe, with diagrams and text, the structure of the system to build, namely: what software components constitute the system, which function each component implements, and how the components are interconnected. The activity is particularly relevant for technical and managerial reasons.
In fact, design allows one to break the complexity of building a system by separating concerns, that is, by allocating functions to components and by specifying functions in terms of more elementary and simpler-to-implement components.
The system architecture can also be used as an input to plan development. In fact, given the list and structure of components that have to be developed, there is a natural organization of work that follows the structure of the system. We will see this in more detail in Section 3.3.
The definition of a system architecture can be based on a pattern or predefined blueprints. Many different architectural blueprints have been proposed in the literature. Among these, some of the most commonly used include:
Many web applications and many desktop applications use the data-centric architectural style.
Figure 2.2 provides a pictorial representation of the different architectural styles we have just presented. Notice how the data-centric architecture is composed of two MVCs.
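To make the idea of separating concerns more concrete, here is a minimal sketch of the model-view-controller (MVC) organization in code; the example application and all class names are hypothetical.

```python
# A minimal model-view-controller sketch: the model owns the data, the view
# renders it, and the controller turns user actions into model updates.
class TodoModel:
    def __init__(self):
        self.items = []

    def add(self, text):
        self.items.append(text)


class TodoView:
    def render(self, items):
        for position, item in enumerate(items, start=1):
            print(f"{position}. {item}")


class TodoController:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def add_item(self, text):
        self.model.add(text)
        self.view.render(self.model.items)


controller = TodoController(TodoModel(), TodoView())
controller.add_item("Write the requirements document")
```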
A popular way of presenting the architecture of a software system is the one proposed in Kruchten (1995), which is based on the UML, and according to which the architecture of a software system is described by “4+1” diagrams.
Four views describe the structure of the system: the logical view, the process view, the development view, and the physical view.
The last view is the use case diagram view, which we have briefly described in Section 2.1.
The goal of the implementation phase is writing the code that implements the components identified in the architecture.
Some of the aspects of this activity that are more closely related to project management include
Verification is the set of activities performed on a system to ensure that the system implements the requirements correctly. The definition is taken from SAE (1996) and distinguishes verification from validation, which is instead performed to ensure that the requirements describe the intended system. Thus, validation ensures that we are building the right system, while verification ensures that we are building the system right.
Verification and validation are collectively known by the acronym V&V. The main way of performing V&V of software systems is testing. However, see also Sections 2.1.4 and 4.3 for a more complete discussion.
Testing is one way of performing verification and validation. Other methodologies include simulation, formal validation, and inspections.
Testing activities can be classified according to their scope. In this case, we distinguish between the following:
While unit tests are written and executed by the developer writing the code being tested, integration and system tests are typically performed by an independent team and are organized in two steps: test plan definition, and test execution and reporting.
Many software engineering books also include a test planning activity, which has the goal of identifying the resources, the schedule, and the order in which the tests will be executed. This emphasizes the fact that testing can be organized as a subproject and have its own plans and schedule.
Starting from the requirements of a system, the goal of the test plan definition is to write the tests that will be performed on the system.
The output of this activity is a document listing a set of test cases, each of which describes how to perform a test on the system. Test cases are structured natural language descriptions that specify all the information needed to carry out a test, such as the initial state of the system, the inputs to be provided, the steps to be performed, the expected outputs, and the expected final state of the system.
Different test cases need to be defined for each requirement. Each test case, in fact, verifies either a particular condition specified by the requirement or the implementation of the requirement in different operational conditions.
Traceability information, which links a test case to the requirement it tests, helps manage changes and the overall maintenance process.
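A minimal sketch of such a test case, represented here as a structured record with a traceability link, is shown below; identifiers, field values, and wording are hypothetical.

```python
# A structured test case with the information listed above and a traceability
# link to the requirement it verifies; all values are invented.
test_case = {
    "id": "TC-042",
    "traces_to": "REQ-017",  # the requirement this test case verifies
    "initial_state": "a user is registered and logged out",
    "inputs": ["valid username", "valid password"],
    "steps": [
        "open the login page",
        "enter the credentials",
        "press the login button",
    ],
    "expected_output": "the personal dashboard is displayed",
    "expected_final_state": "the user is logged in",
}

print(f"{test_case['id']} traces to {test_case['traces_to']}")
```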
Starting from the test plan definition, test execution and reporting is the activity during which the team or automated procedures execute tests and report on the outputs, that is, which tests succeeded and which failed.
Manual test execution is time consuming and demands motivation and commitment from the people performing it. There are various reasons for this.
The first is that it is repetitive: some operations have to be repeated over and over again to put the system in a known state before starting a test. Attention, however, has to remain high to ensure that all glitches are properly recognized and reported.
The second is that testing is the last activity before releasing a system. It requires quite some commitment to work hard at this stage of development to demonstrate that a system does not work and that the team needs to go back to the design room.
The third is that when an error is found, the process stops until a fix is found. When the fix is ready, it is necessary to start the testing activities all over again, possibly including the definition of new test cases, to verify that the bug has actually been fixed. This is also needed to ensure that there are no regressions, that is, that working functions have not been unintentionally broken by the fix. The corresponding workflow is shown in Figure 2.3.
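As an illustration of guarding against regressions, the sketch below keeps, as an automated test, the input that originally triggered a bug in a hypothetical function; both the function and the bug are invented for the example.

```python
# A minimal regression test: the failing input that revealed the bug is kept
# as a permanent, automated check.
def parse_price(text: str) -> float:
    """Convert a price string such as '12.50' or 'EUR 12.50' into a float."""
    return float(text.replace("EUR", "").strip())

def test_parse_price_regression():
    # This input exposed the original defect; re-running the test after every
    # fix ensures the behavior does not break again.
    assert parse_price("EUR 12.50") == 12.50

test_parse_price_regression()
print("regression test passed")
```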
Different strategies have been proposed to write effective test cases. It has to be remarked, however, that testing is rarely complete and it can only demonstrate that a system does not work, rather than proving that a system is correct.
The final step of the development process is releasing and installing a system so that it can be used by the customer and operations can start. The transition to operations can be very simple for the project team. Consider, for instance, a case in which the software is handed to the client as a self-installing application on a CD, or made available on a website for customers to download.
In other situations, deployment needs to be carefully planned. This happens when a new software system replaces an obsolete system performing business- or mission-critical functions. In this situation, the goal is to move to the new technology without interrupting the service.
Consider a case in which a system controlling the routing of luggage in an airport needs to be upgraded. The development and installation of the new version of the system have to be organized so that no interruptions occur and no luggage is mismanaged.
A standard practice for projects of this kind makes sure that any change or software evolution does not interfere with production. To achieve this goal, the project team sets up three exact and independent replicas of the same operating environment, as shown in Figure 2.4. In particular, we distinguish between the following:
Even if we separate development and testing from production, alas, it is still necessary to ensure a smooth transition of operations when the new system is ready. In general, three factors need to be taken into account when deploying a new system.
They are
The deployment process thus typically requires performing an assessment of readiness and an evaluation of the gaps, which has the goal of understanding the main criticalities and risks. An analysis of documents and interviews with project stakeholders highlights all the critical issues related to the deployment of the new technology.
This is followed by the selection of a migration strategy, which defines an approach to the introduction of the new system. According to Wysocki (2011), the following approaches are possible:
Notice that, in all the approaches mentioned above, with the exception of the cut-over approach, appropriate measures have to be taken to maintain or transfer data from the old system to the new system. For instance, in the piloting approach, adequate procedural or technical interfaces need to be defined, so that the data produced by the business unit operating the new system can be used by the units using the old system.
When the strategy is agreed upon, the final step is the implementation of the release process, which in turn consists of the following steps:
Notice that data migration and the installation of the new system need to be performed together. They are typically scheduled in a period when the service can be interrupted and the system can be taken off-line, to reduce pressure on the team and the risks, should something not go as expected.
Operations and maintenance include the activities to ensure that a product remains functional after its release.
In general, operations are outside the scope of a project. However, many one-off development projects plan a support activity after a system is released to ensure that the project outputs meet the quality goals and the transition to operations is as smooth as possible.
The goals of this activity typically include
Maintenance occurs throughout the lifecycle of a system, until it is retired. It can be framed either as a project or as operational work and, as such, it often poses a dilemma to the project manager.
ISO/IEC (2006) identifies four categories of maintenance for software: corrective, adaptive, perfective, and preventive.
Of these, perfective and corrective maintenance are triggered by suggestions and bug reports sent by users. Suggestions and bug reports are also called issues or tickets.
When maintenance is the last activity of a project, two points have to be considered. The first is how much work has to be allocated, since we do not know in advance how many defects will be signaled. A general strategy is considering the complexity of the system, looking at the outputs of the testing phase, and allocating a percentage of the overall development effort. The second point is distinguishing between tickets that are in the scope of the project (called “nonconformance reports”) and tickets that are outside the scope of the project (called “concessions”). In fact, as users start using the system, they might come up with new ideas and proposals. However, the implementation of these new features is often better framed within the scope of a new project.
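For instance, under the purely illustrative assumption that development required 1,000 person-hours and that 10 percent of the effort is reserved for post-release fixes, the team would allocate 100 person-hours to this activity.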
When the planned maintenance period ends, tickets might still arrive. In these situations, organizations and managers are faced with the dilemma of whether the activities should be framed in the context of a new project or not. In some cases, in fact, the amount of work required for the fixes does not justify setting up the machinery of a project. The choice, of course, can boomerang if a continuous stream of small change requests keeps coming in or if the fixes turn out to be more complex than initially envisaged. While there is no silver bullet to decide on the matter, one good practice is to have the team always monitor the time they spend on maintenance activities.
Agile methodologies, by contrast, blur the distinction between development and maintenance by organizing the development of a system in iterations. Each iteration includes the development of new planned features and selected tickets identified since the last release. This will be explained in more detail in Chapter 7.
One important aspect of support and maintenance activities is keeping formal track of the tickets.
This is usually achieved by adopting an issue tracking system, which records each ticket together with the state it is in.
Figure 2.5 shows an example workflow, adapted from the Bugzilla Development Team (2013). A bug starts in the state unconfirmed after it is reported by a user. If the quality assurance team confirms its presence, the bug moves to the state confirmed, where it can be taken in charge by a developer. The state of the bug then moves to in progress. When the developer considers the fix to be adequate, he or she sets the state to resolved. V&V by the quality assurance team, finally, determines whether the solution is satisfactory, in which case the bug is closed. Alas, if V&V determines the fix is not adequate, the bug returns to the state confirmed and another solution has to be found.
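The workflow just described can also be read as a small state machine; the sketch below encodes its states and transitions in code, with event names invented for the example.

```python
# The bug lifecycle of Figure 2.5 as a transition table: (state, event) -> new state.
TRANSITIONS = {
    ("unconfirmed", "confirm"):   "confirmed",    # QA confirms the presence of the bug
    ("confirmed",   "start"):     "in progress",  # a developer takes the bug in charge
    ("in progress", "fix"):       "resolved",     # the developer proposes a fix
    ("resolved",    "verify_ok"): "closed",       # V&V accepts the fix
    ("resolved",    "reject"):    "confirmed",    # V&V rejects the fix
}

def next_state(state: str, event: str) -> str:
    """Return the new state of the ticket, or raise if the transition is not allowed."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"event '{event}' not allowed in state '{state}'")
    return TRANSITIONS[(state, event)]

state = "unconfirmed"
for event in ["confirm", "start", "fix", "reject", "start", "fix", "verify_ok"]:
    state = next_state(state, event)
print(state)  # closed
```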
Booch, G., J. Rumbaugh, and I. Jacobson, 1999. The Unified Modeling Language User Guide. Addison-Wesley, Boston, MA, USA.
Bozzano, M. and A. Villafiorita, 2010. Design and Safety Assessment of Critical Systems. Boston, MA: CRC Press (Taylor & Francis), an Auerbach Book.
Bradner, S., 1997. Key words for use in RFCs to indicate requirement levels. Request for Comments 2119, Network Working Group. Available at http://www.ietf.org/rfc/rfc2119.txt. Last accessed May 1, 2013.
Bugzilla Development Team, 2013, March. The Bugzilla Guide—4.2.5 Release. Bugzilla. http://www.bugzilla.org/docs/4.2/en/html/index.html. Last retrieved November 15, 2013.
Cambridge University Press, 2013. Cambridge Advanced Learner’s Dictionary & Thesaurus. Cambridge University Press, Cambridge, England. Available at http://dictionary.cambridge.org/dictionary. Last retrieved May 1, 2013.
Clarke, E. M., O. Grumberg, and D. A. Peled, 2000. Model Checking. MIT Press, Cambridge, MA, USA.
Fowler, M. and K. Scott, 2000. UML Distilled (2nd Ed.): A Brief Guide to the Standard Object Modeling Language. Boston, MA: Addison-Wesley.
Free Software Foundation, 2013, April. GNU coding standards. Available at http://www.gnu.org/prep/standards/. Last retrieved May 1, 2013.
Gotel, O. C. and A. C. W. Finkelstein, 1994. An analysis of the requirements traceability problem. In Proceedings of ICRE94, 1st International Conference on Requirements Engineering, Colorado Springs, CO: IEEE CS Press.
International Organization for Standardization, 2010. Ergonomics of human-system interaction. Technical Report 9241-210:2010, ISO.
ISO/IEC, 2006, September. Software engineering—software life cycle processes—maintenance. Technical Report IEEE Std 14764-2006, ISO/IEC.
Kruchten, P., 1995. Architectural blueprints—The “4+1” view model of software architecture. IEEE Software 12(6), 44–50.
OMG, 2011, January. Business process model and notation (BPMN). Technical Report formal/2011-01-03, OMG. Available at http://www.omg.org/spec/BPMN/2.0/. Last retrieved June 10, 2013.
Rönkkö, K., 2010, November. Ethnography. Encyclopedia of Software Engineering.
SAE, 1996. Certification considerations for highly-integrated or complex aircraft systems. Technical Report ARP4754, Society of Automotive Engineers.
Spivey, J. M., 1989. The Z Notation: A Reference Manual. Upper Saddle River, NJ: Prentice-Hall, Inc.
Wysocki, R. K., 2011, October. Effective Project Management: Traditional, Agile, Extreme (6, illustrated ed.). John Wiley & Sons, New York, NY, USA.