© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
R. F. Rose, Software Development Activity Cycles, https://doi.org/10.1007/978-1-4842-8239-7_10

10. Final Remarks

Robert Rose
Alexandria, VA, USA

Types of Software

Software does not fit into one category.

There are Embedded Systems, such as robotics, the Internet of Things (IoT), fly-by-wire aircraft control systems, and voice recognition, where controls and “programs” are burnt into chips. These are at the heart of “Data Processing.”

There is Enabling Software, including Operating Systems, LAN software such as Novell, Relational Database Management Systems (RDBMS) such as Oracle, and packaged frameworks such as .NET.

There are Products, including Office Productivity software such as MS Word, WordPerfect, Excel, Adobe Acrobat, and SAP, and there is packaged Application Software such as Entertainment and Games packages (e.g., Song Surgeon) and mobile device apps.

And there are Information Systems, including Management Information Systems (MIS), Decision Support Systems, and Warehouse systems, each requiring programming effort. Of these, there are Systems of Record (SORs) and customer-facing Systems of Engagement (SOEs) such as websites or mobile devices, explained earlier in the text. The interpretation of DPAC contained herein is for a System of Record.

Types of Implementation

First, there are “greenfield” projects, where there is no existing system and the implementation can proceed from scratch. Then, there are projects that must deal with legacy systems:
  • Upgrade (modernize) the legacy system for intranet or Internet access.

  • Consolidate several SOR systems into one.

  • Create a mobile access capability.

  • Add a data warehouse.

  • Attach a new service to an existing platform (e.g., add an Inventory service to a customer relations/order processing front end).

  • Interface with COTS packages.

  • Replace the legacy system with a more efficient bespoke system (e.g., convert a mainframe application written mostly in COBOL to an application built on a Relational Database Management System (RDBMS) such as Oracle).

While retirement seems inevitable, there are systems that are still going strong through decades of use. Legacy systems may need to be enhanced to enjoy the delights of user-facing web and mobile experiences.

DPAC Activity Cycles

Each Activity Cycle of DPAC represents a back-and-forth between implementation personnel and the client (end users). If the ellipses are rotated 90 degrees, one would observe a straight line, and movement through the cycle would be represented by an up-and-down interaction between developers and clients from the organization.

DPAC distinguishes itself from other development models in that it represents the movement of people rather than the progress of software, although the evolving software progresses (by human effort) along the same lines. The DPAC model is a shadow of activity in the real world.

The DPAC model is designed to address the emergence of functional requirements throughout the development process, beginning with a Vision Statement and continuing into the Support Cycle. To that end, the cycles of activity in the DPAC model are represented as progressive, recursive (re-entrant), nested contiguous ovals; each activity cycle is overlaid with an interpretation of the Deming quality control cycle phases of Plan, Do, Check, and Act (PDCA). Each “Check” Phase has at least one part consisting of “review by User.” If the user is not satisfied at that level, the activity cycle repeats until that is accomplished. Where the user approves, activity advances to the next cycle.

In DPAC, user feedback drives the development effort.

For example, the Process Detail Cycle repeats until the user is satisfied with the result, at which time activity advances to the Unit Development Cycle. If, for some reason, criticism of the Unit Development shows poor understanding of the client’s needs (or the User has simply changed his mind), activity may circle back to the Process Detail Cycle or even back to the Process Overview.
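The gating behavior described above, where a cycle repeats until the user approves and approval advances activity to the next cycle, can be sketched in a few lines of Python. This is a minimal illustration only; the cycle names and the `review` callback are hypothetical stand-ins for real review activity, not part of the DPAC model itself.

```python
# Minimal sketch of DPAC's user-gated activity cycles (illustrative only).
# Cycle names and the review callback are hypothetical stand-ins.

CYCLES = ["Process Overview", "Process Detail", "Unit Development"]

def run_cycle(name, review):
    """One activity cycle: Plan/Do, then Check by the user, until approved."""
    while True:
        # Plan / Do: implementation work happens here.
        if review(name):   # Check: "review by User"
            return         # Act: approved, advance to the next cycle
        # Not approved: the cycle repeats (re-entrant).

def dpac(review, start=0):
    """Advance through the cycles; each advance is driven by user approval."""
    i = start
    while i < len(CYCLES):
        run_cycle(CYCLES[i], review)
        i += 1
        # A later criticism could set i back to an earlier cycle,
        # e.g., from Unit Development back to Process Detail.
```

The point of the sketch is simply that user feedback, not a schedule, is the transition condition between cycles.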

Staffing

Different personalities and experience bring different points of view to the table. While there may be as many exceptions as there are rules, a good business analyst has the persistence to get to the bottom of a user’s intent (work not for the faint of heart) and the modeling skills (such as creation of a DFD) needed to describe the process. When the client interviewed is not available to the programmers, a business analyst may be able to step in as a surrogate. In DPAC, in the Process Overview Cycle, two business analysts interview department heads (in triad) to outline the basic processes of the implementation field, rendered in a DFD. Where there are multiple systems to be consolidated, a DFD should be created for each (by individual pairs of analysts), and an overall consolidated DFD created as a projected solution to duplication.

Programmers are divided into those who like to build from scratch (developers) and those who support the system. Testers, in my experience, are a breed apart. They may know how to read code but are definitely not interested in development programming. They like to dig in and see how well an item of code meets requirements and whether it is stable and free of error. In this brand-new era of comprehensive testing automation, however, testers must learn the basics of programming test automation tool(s).

Tester and Programmer Pairs

Following the production of a comprehensive DFD, DPAC pairs a tester and a programmer to uncover the needs of the client by interviewing prospective clients of the system-to-be, applying the triad principle. There are several very good reasons for this pairing:
  1. Each brings a different point of view to the problem of rendering the intent of the user into code.

  2. Test automation requires from 2:3 to 1:5 (or more) in the ratio of unit code to tool code; that is, the test tool code can run to several times the lines of code (LOC) set down by the programmer/developer. Working together shortens the tester’s overall time to program the tool, as opposed to programming it after the programmer has completed his part of the task.

  3. The perspective of the tester can provide a remedy to errors in code not apparent to the programmer. Early prevention is the cure.

  4. Each test must be created manually prior to automation; the symbiosis of working together facilitates that effort.

It may be true that writing unit code this way will take longer than it would for a programmer working alone, but the overall advantage of helping to create bug- and error-free code saves considerable cost down the line.

The matching of programmers to testers can begin in the hiring process. Each candidate can be questioned regarding her willingness to work with the other side. DPAC, in this interpretation, uses the lead programmer and the backup lead tester to interview prospective programmers. The lead tester and backup lead programmer interview prospective tester candidates. Some effort can be made in the final interviews to hire compatible personnel. (Likes repel; opposites attract.) Given the proclivities and differences between programmers and testers, compatible matches may be easier to find than one might expect, provided, of course, that each is willing to give triad development a try. Call it the “willing suspension of disbelief.”

On Tools

There are many roles for tools in the development process beyond the automation of software testing. Load and performance testing should be automated. Data must be generated for test. Other functions assisted by application software include, but are by no means restricted to:
  • Facilitating group communications (including Wiki for collecting and disseminating information and document control)

  • Controlling software versions as a part of a larger framework for configuration management

  • Maintaining the compendium of business rules

  • Carrying out database refactoring

  • Managing the handling of Change Reports (CRpts) and Discrepancy Reports (DRs)

Regarding Tools for Automated Testing

There is a significant difference between a customer-facing, Business-to-Customer (B2C) application, such as a website, IoT (Internet of Things) device, or mobile application (app), and a Management Information System (MIS) facing inward for internal organization use. The most significant difference is that an MIS uses inward-facing screens for input and screen-based or hardcopy reports as the primary output. To put it another way, B2C applications are defined up front and worked back to implementation from the outside in. Full requirements for Systems of Record (SORs) such as an MIS evolve over time and are developed from the inside out. Further, part of the value of an SOR is that it enables ad hoc inquiry.

It is absolutely essential that software for B2C applications get out the door as rapidly as possible to be competitive in the marketplace. These applications get responses from users after they are released and tend to have a restricted user interface. The development of Systems of Record relies on feedback from users within the organization and may encompass many different services and interfaces. In the initial process of development, internal users are consulted in each iteration of the Process Overview Cycle in the Elaboration Stage and in the cycles of the Construction Stage.

Another consideration regarding automated testing for MIS development efforts is that the database structure also evolves over time – through all the development cycles. Scripts for automation of functional requirement testing may have to be created or modified for each recursive iteration, though this still takes less time than performing the tests manually.

A test plan must be prepared whether the test is automated or not. Even if the same test personnel are used for each progressive iteration, the time difference between manual testing and preparing or altering a test-tool script is significant. As stated earlier, the ratio of lines of program code (LOC) to automated test scripts is 2:3 or 1:5 or more. That is, 1000 lines of code can require up to 5000 lines of test script or more. The reason to use test automation scripts is to make the test process reusable for regression testing of changes made to the original unit code.
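The payoff of that reuse is easiest to see in miniature. The sketch below uses Python’s built-in `unittest`; the `parse_amount` function is a hypothetical unit under test, not drawn from any real system. Once written, the suite is rerun unchanged as a regression check after every change to the unit code:

```python
import unittest

def parse_amount(text):
    """Hypothetical unit under test: parse '1,234.50' into integer cents."""
    cleaned = text.replace(",", "").strip()
    dollars, _, cents = cleaned.partition(".")
    # Pad/truncate the cents part so "5", "50", and "" all behave sensibly.
    return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

class ParseAmountRegression(unittest.TestCase):
    # Each automated test encodes one manually designed case; the whole
    # suite is rerun unchanged after every change to parse_amount.
    def test_plain(self):
        self.assertEqual(parse_amount("12.34"), 1234)

    def test_thousands_separator(self):
        self.assertEqual(parse_amount("1,234.50"), 123450)

    def test_no_cents(self):
        self.assertEqual(parse_amount("7"), 700)
```

Run with `python -m unittest` against the module. The script cost more lines than the function, consistent with the ratios above, but every subsequent change to `parse_amount` is now checked for free.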

The Tool-Scape Is Changing

At this writing (2022), the tool-scape is exploding with new entries and consolidations in every category. Between January 2017 and February 2018, venture capital invested $2,337,000,000 in 41 companies (plus 11 undisclosed amounts) in the tools market, crowned by $165,000,000 to Split Software in January 2017 and $100,000,000 to XebiaLabs in February 2018 – a market expected to reach $8.8 billion by 2023.

Source: Aaron Walker, “Agile Development: The State of DevOps Tools in 2018,” g2crowd/blog Technology Research, February 21, 2018 (retrieved 8/16/2018)

The venture capital pouring into this sector means that a number of organizations will be absorbed by the fortunate few. When that happens, what becomes of the users of the tool absorbed?

I have judiciously avoided the mention of specific tools, giving only indications where a category of tool (particularly administrative and support tools) may apply: first and foremost because I have no firsthand knowledge of, nor hands-on experience with, any of the tools in use at this writing; and second, because the tool-scape is changing, and will change so dramatically, that whatever I might have to say would be out of date before this treatise comes to print.

However, I will make three observations:
  • A chain is only as strong as its weakest link. Beware of “all and everything” tool chains.

  • You will spend a lot of time and treasure discovering what a tool will not do.

  • “Best of breed” tools are the best if they can “play well with others” (i.e., they offer enough of an open architecture to allow coupling with other best of breed tools).

No Silver Bullet (Brooks)

Tools promise to lower costs – but none of them are free. Even “freeware” has a learning curve, and training takes time and costs money. Further, no tool will turn a bad design into a good one. And finally:

Automation of an inefficient process automates inefficiency.

Each tool is contained within a methodology – the more “integrated” the tool, the more it is dependent on a particular approach to doing business. Also, each tool has its limits; the users of the tool will spend a great deal of effort discovering what the tool does not do. In some cases, this may mean abandonment of the tool (which compromises the integrity of the effort) or squeezing reality to fit the confines of the tool.

Any tool must support the integrity of the Vision, as well as the design and implementation thereof.

Categories of Essential Tools

That being said, there are categories of essential tools that will facilitate a “lite” development effort as described herein.

Drawing tools:
  • Produce readable data flow diagrams and flowcharts

  • Produce a diagram of the database design (preferably in IDEF1X or IE)

Configuration management (CM) tools:
  • Manage business rules and requirements

  • Maintain the traceability matrix

  • Track the progress of Change Reports and Discrepancy Reports

Version control tools:
  • Manage software version control

  • Manage data and database version control

  • Maintain version control of test scripts

Process automation tools:
  • Automate software “builds”

  • Automate software test

Communication and collaboration tools:
  • Facilitate document circulation (e.g., a Wiki for standards, guidelines, and procedures)

  • Support face-to-face communication for working from home (WFH)

CM tools should be able to identify the business analyst who recorded the business needs and the individual programmer (or programmer/tester team) responsible for the completed code. This accountability facilitates the progress of test – a Discrepancy Report can be directed to the originator of the code, thus expediting resolution.
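As a data-structure sketch, the accountability record a CM tool keeps per code unit might look like the following. The field names and routing rule are assumptions for illustration, not the schema of any real CM product:

```python
from dataclasses import dataclass

@dataclass
class UnitRecord:
    """Hypothetical traceability entry a CM tool keeps per completed code unit."""
    unit: str        # code unit / module name
    analyst: str     # business analyst who recorded the business need
    programmer: str  # responsible programmer (or pair lead)
    tester: str      # paired tester

def route_discrepancy(record, dr_id):
    """Direct a Discrepancy Report to the originator of the code."""
    return (f"DR {dr_id} on {record.unit} -> "
            f"{record.programmer} (cc: {record.tester})")
```

For example, `route_discrepancy(UnitRecord("inventory", "A. Smith", "B. Jones", "C. Lee"), "42")` would route the report to the responsible programmer with the paired tester copied, rather than to a general queue.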

Dr. Winston W. Royce vs. the “Waterfall”

The earliest use of the term “waterfall” may have been in a 1976 paper by Bell and Thayer, “Software Requirements: Are They Really a Problem?” (TRW Defense and Space Systems Group, Redondo Beach, California; also published in Proceedings of the 2nd International Conference on Software Engineering, IEEE Computer Society Press, 1976). (Wikipedia)

The word “waterfall” does not appear in Royce’s paper, “Managing the Development of Large Software Systems” (Proceedings, IEEE WESCON, August 1970, pages 1-9; originally also published by TRW). In fact, Royce was a firm critic of this approach.

First launched in 1970, the Defense Support Program (DSP) comprised 23 reconnaissance satellites, all built by TRW, which are the principal components of the Satellite Early Warning System currently used by the United States (Wikipedia). At that time (1970), models for the development of software followed the process of manufacturing “things,” as indeed was Royce’s interpretation. This simile was maintained over the following 30 years and persists to this day, although each succeeding model of the software development process brought something new to the table as computing began to turn from “Data Processing” to Information Processing.

Royce was, however, the first to use this particular set of labels for the steps depicted in Figure 10-1.

A staircase diagram for the Processing of Information. From top to bottom, the steps are System Requirements, Software Requirements, Analysis, Program Design, Coding, Testing, and Operations.

Figure 10-1

The cascading model

With regard to this process, he relates:
  • “I believe in this concept, but the implementation described above is risky and invites failure. The problem is illustrated in [Figure 10-2] below”

A staircase diagram for the actual life processing of Information. From top to bottom, the steps are System Requirements, Software Requirements, Analysis, Program Design, back to Software Requirements, then Coding, Testing, back to Program Design.

Figure 10-2

Reality check

Royce explained skipping over the analysis and coding steps of the model:

One cannot, of course, produce software without these steps, but generally [speaking in terms of the satellite manufacture support environment] these phases are managed with relative ease and have little impact on requirements design and testing.

He wrote, referring to the amended model (Figure 10-2):

... I believe the illustrated approach to be fundamentally sound. The remainder of this discussion presents five additional features that must be added to this basic approach to eliminate most of the development risks.

These five steps are cogent to this day, regardless of the model employed; an abridged version follows, my emphasis in italics:

STEP 1: PROGRAM DESIGN COMES FIRST
  • Write an overview document that is understandable, informative and current. Each and every worker must have an elemental understanding of the system. At least one person must have a deep understanding of the system which comes partially from having had to write an overview document.

STEP 2: DOCUMENT THE DESIGN
  • Each designer must communicate with interfacing designers, with his management and possibly with the customer. A verbal record is too intangible to provide an adequate basis for an interface or management decision.

Recall that Royce is referring to software to support a satellite installation where the customer is a group of scientific analysts probably not culpable for program error. However, the other parties are members of the development team, and need to be kept in the loop. The remark regarding the need for more than a verbal record is true today:
  • The real monetary value of good documentation begins downstream in the development process during the testing phase and continues through operations and redesign.

STEP 3: DO IT TWICE
  • If the computer program in question is being developed for the first time, arrange matters so that the version finally delivered to the customer for operational deployment is actually the second version insofar as critical design/operations areas are concerned.

This was similarly noted by Frederick Brooks in The Mythical Man-Month (1975) as “Plan to throw the first one away, you will anyway.” In DPAC, the software is reinvented in every iteration of every activity cycle, progressively strengthened over time.

STEP 4: PLAN, CONTROL AND MONITOR TESTING
  • Without question the biggest user of project resources, whether it be manpower, computer time, or management judgment, is the test phase. It is the phase of greatest risk in terms of dollars.

  • Most errors are of an obvious nature that can be easily spotted by visual inspection. Every bit of an analysis and every bit of code should be subjected to a simple visual scan by a second party who did not do the original analysis or code.

STEP 5: INVOLVE THE CUSTOMER
  • For some reason what a software design is going to do is subject to wide interpretation even after previous agreement. It is important to involve the customer in a formal way so that he has committed himself at earlier points before final delivery. To give the contractor free rein between requirement definition and operation is inviting trouble.

This is ameliorated in DPAC by progressive iteration of activity cycles uncovering requirements up to and including deployment.

And most relevant even to today’s activities whatever the model:

SUMMARY

I [Royce] would emphasize that each item costs some additional sum of money. If the relatively simpler process without the five complexities described here would work successfully, then of course the additional money is not well spent. In my [Royce] experience, however, the simpler method [Figure 10-1] has never worked on large software development efforts, and the costs to recover far exceeded those required to finance the five-step process listed.

Source: Abridged from Winston Royce “Managing the Development of Large Software Systems,” Proceedings, IEEE WESCON, August 1970, pages 1-9

It is still a “hard sell.”

Royce was indeed the first to label the parts of the cascading model in a form that has become common today, falsely enshrined as steps in the Software Development Life Cycle (SDLC) paradigm.

The five steps, however, are still valid and significant advice, and that, to me, is the enduring value of his article. Pity they are mostly overlooked in describing the paper.

Kudos

Royce’s paper is included in Harry R. Lewis, ed., Ideas That Created the Future: Classic Papers of Computer Science (MIT Press, 2021): classic papers by thinkers ranging from Aristotle and Leibniz to Norbert Wiener and Gordon Moore that chart the evolution of computer science.

This work collects 46 classic papers in computer science that map the evolution of the field. Royce is Number 33.

Conclusion

Our profession is rife with misunderstandings, misinterpretations, misapplications, and the lack of a viable model for the development of information systems (as opposed to data processing). I am putting DPAC forward as an independently generated model, appropriate for agile and lean methods, that resolves many of the problems associated with our understanding of the software development process. If nothing else, I hope it will serve as a source for heuristic discussions of these issues. Again - Onward through the fog!
