Chapter 7. Automating the Agile ALM

Application lifecycle management (ALM) touches every aspect of the software and systems lifecycle. This includes requirements gathering, design, development, testing, application deployment, and operations support. Automation plays a major role in the agile ALM, which sets it apart in many ways from other software development methodologies. Since you simply cannot implement all of these practices at once, you will need to assess the scope and priorities for your team. That said, it is essential to appreciate the big picture at all times so that the functions you implement are in alignment with the overall goals and structure of your ALM.

7.1 Goals of Automating the Agile ALM

The goal in automating application lifecycle management (ALM) is to ensure that processes are repeatable, error free, and executed as quickly as possible. A related goal is to run as many of these processes as possible, and as appropriate, without human intervention. For some companies, the automation acts almost as a robot build engineer who never needs a vacation. Automation also provides a suitable framework for writing automated tests to support the ALM, improving quality and productivity and avoiding costly mistakes and rework. In Chapter 20, we discuss continuous testing, which is a must-have for DevOps. In fact, many experienced professionals would say that you are likely to fail if you do not automate.

7.2 Why Automating the ALM Is Important

Automating the ALM is important because tools such as workflow automation help ensure that your processes are repeatable and traceable. Workflow automation also facilitates teamwork because these tools enable each stakeholder to understand what they need to do on a daily basis, as well as the status of requests that depend upon others. When organizations use well-designed workflow automation, team members can easily ascertain the status of a request and employ formal or even informal channels to get tasks completed. ALM automation helps avoid costly mistakes and rework while providing a ready source of information on the status of requests. Even more importantly, these tools have the effect of forming a knowledge base so that everyone understands the processes, including requirements, steps for completion, and criteria for verification and validation.

7.3 Where Do I Start?

We always start by observing and learning as much as possible about the existing processes, making no assumptions about what may or may not be broken. It has always been our experience that most teams are doing some things very well, even if their approach is not the one we would personally recommend. It rarely makes sense to fix things that are not perceived by the team as being a priority. Initially, we usually write scripts to automate processes with a strong focus on eliminating human error. Our first scripts often require an operator to read the screen and then hit Enter a few times. We call this approach attended automation, because the first generation of scripts often requires some operator intervention, or at least hand-holding. Although the long-term goal is to have scripts that can run without human intervention, beginning with this approach often goes a long way toward avoiding human error, improving productivity, and speeding up the overall deployment process. This is a great example of where 20 percent of the effort often gives you 80 percent of the value. Getting started in this manner is both pragmatic and, in our experience, very effective; a minimal sketch of an attended deployment script follows. We will then discuss the importance of both process and tools.
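
To make this concrete, here is a minimal sketch of an attended deployment script in Python; the step names and commands are placeholders for whatever your own deployment actually requires, and the script simply pauses so the operator can read the screen before each step.

    #!/usr/bin/env python3
    """Minimal sketch of an "attended" deployment script.

    The step names and commands below are illustrative only; substitute the
    commands your own deployment actually requires.
    """
    import subprocess
    import sys

    STEPS = [
        ("Stop the application service", ["echo", "systemctl stop myapp"]),
        ("Copy the new release into place", ["echo", "cp -r build/ /opt/myapp"]),
        ("Restart the application service", ["echo", "systemctl start myapp"]),
    ]

    def confirm(prompt):
        """Pause so the operator can read the screen before continuing."""
        answer = input(f"{prompt} -- press Enter to continue, or type 'q' to quit: ")
        if answer.strip().lower() == "q":
            print("Deployment halted by operator.")
            sys.exit(1)

    def main():
        for description, command in STEPS:
            confirm(f"Next step: {description}")
            result = subprocess.run(command, capture_output=True, text=True)
            print(result.stdout.strip())
            if result.returncode != 0:
                print(f"Step failed: {description}\n{result.stderr}")
                sys.exit(result.returncode)
        print("Deployment complete.")

    if __name__ == "__main__":
        main()

The same structure can later run unattended by replacing the confirmation prompt with logging, which is how the first generation of attended scripts typically evolves.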

7.4 Tools

Obviously, tools are essential to automation. But our discussion of tools still begins with the question of whether they matter. Then we consider whether process is more important than tools, how tools fit within the scope of the ALM, and commercial versus open-source options.

7.4.1 Do Tools Matter?

Application lifecycle management places a much stronger emphasis on tools than most other software and systems methodologies. Integrations between tools are especially important because the ALM has a very broad scope, from early requirements gathering to project management through the entire software and systems lifecycle. ALM tools should facilitate the completion of all required tasks, and this is exactly where workflow automation can be extremely valuable. Workflow automation provides much-needed structure to support the entire development effort. Determining the workflow that you need is not always easy, and sometimes it is necessary to model your process to determine the optimal flow of steps, including tasks, approvals, decision points, and milestones. A lot is required to successfully implement workflow automation and any other tool supporting the ALM as well. But what about those process improvement professionals who suggest that the process is more important than the tool? Are they wrong, or should we really just focus on the process?

7.4.2 Process over Tools

Process improvement experts have long maintained that getting the process right is a lot more important than picking the right tools. Many of our colleagues lecture us that we should be more concerned with the process than the tools. We used to believe that too. But after many years of implementing business and technical processes, we have come to the opinion that tools are just as important as getting the process right. In fact, both are must-haves if you are to be successful. Many tools come with well-thought-out process models that vendors have developed with input from their extensive customer base. If you select the right tools, then you also get world-class processes straight out of the box. The process of picking the right tools also helps you think through the processes that you want and need.

7.4.3 Understanding Tools in the Scope of ALM

Tools and automation are essential throughout the entire ALM, starting with workflow automation to ensure that processes are repeatable, traceable, and fully enforced. But that is not all. The agile ALM starts with requirements, typically described in user stories and often supplemented by test cases. We typically focus most of our energy on automating the application build, package, and deployment. But the true scope really is the full lifecycle, as we will explain throughout the rest of this chapter.

7.4.4 Staying Tools Agnostic

We find that many teams can succeed with almost any toolset that aligns with their technical requirements and their culture. With a few exceptions, we are largely tools agnostic (caveat: bad tools do exist, including some that lose code and should never be used). You should always start by thoroughly understanding your requirements and then conduct at least a proof-of-concept (POC) to verify that the tools can meet at least your basic needs. Teams often fail with tools because they buy the first shiny toy that they see. Just as bad is the rigid view that open-source (essentially free or very low-cost) tools are the only acceptable solution. Some commercial tools are worth paying for, and many vendors do indeed provide significant value, which justifies the purchase cost.

We strongly advocate a structured evaluation and bake-off between at least two or three leading tools vendors in whatever area is being considered. In large companies, we strive to ensure that there is transparency and an openness to input from different members of the team, although this openness needs to be bounded because some team members can be very opinionated and rigid when it comes to tools and the technical direction for automation in general.

It has been our experience that large organizations are not always successful at getting down to only one toolset. There may be good technical reasons for why they need to support more than one set of tools. Even more importantly, the culture of different teams may demand different toolsets. If you force them to switch, these folks will typically spend an enormous amount of effort proving that you were wrong in taking away their preferred toolset and often blame every problem, delay, and challenge on the “bad” toolset that you forced on them.

7.4.5 Commercial versus Open Source

We have worked with technology professionals who had very strong opinions about whether open source is a better approach than using commercial tools. We have seen teams be very successful with open-source toolsets, and that is often because their culture really embraced open source. Some of these folks viewed open source as being essential for their future career aspirations. We have also seen folks who felt that if they could just get the money to buy a particular tool, then all of their problems would be solved.

The choice of whether to use commercial tools or open source often comes down to a balancing act between the budget that you have and the features that you need in order to get your work done. When commercial tools are deemed valuable, many organizations can come up with the budget to purchase them, although all too often they may lack the additional resources to properly implement and support the tools that they have chosen. We believe that it is essential to understand your requirements and the total cost of ownership before choosing a commercial tool or an open-source solution.

We also see some teams that start with open source but then quickly outgrow the limited features that those tools provide. This is not necessarily bad, as the experience of implementing low-cost or free tools can be very helpful in identifying the necessary features that would otherwise not have been readily apparent. We see some teams that use low-cost tools for developers but then purchase more robust solutions for their build and release engineers, a suitable solution for some companies. Regardless of the final decision, in all cases you need to start by understanding exactly what you are doing today.

7.5 What Do I Do Today?

The first step in any process improvement effort should be to assess what the organization is currently doing. Because it is almost certain that agile ALM will require you to make some changes to the way that you are doing things, it is essential to start by assessing your existing practices. We do many assessments for large and small organizations in order to help identify opportunities for improving their processes. We begin by identifying the stakeholders who will be interviewed in what we like to call the “dance card.” We always warn that the dance card grows once word gets around about what we are doing. Typically, our meetings are with individuals or small groups, and we ask participants to explain what is going well and what could be improved. We then compare the responses to the guidance found in industry standards (e.g., IEEE, ISO, EIA/ANSI) and frameworks (e.g., ITIL v3, COBIT). Participants generally are very willing to volunteer what should be improved. Often they have been trying to express their ideas for years, and in larger organizations we frequently find that individuals do not even realize that others elsewhere in the organization share the same view. The assessment becomes the catalyst for change by identifying the important initiatives that need to be at the forefront of the process improvement initiative.

You can conduct the assessment yourself, although having an outside consultant has some strong advantages. First, an outside consultant is free from organizational history that may give the appearance of bias or even political motivation. Participants are often more open to expressing their opinions to an outsider, especially one who is also regarded as an expert. Assessing against industry standards and frameworks is also essential, because relying upon one person’s experience alone is obviously less than optimal. The results of the assessment help identify where to begin the process improvement effort. Usually this starts with automating the workflow.

7.6 Automating the Workflow

The workflow identifies the tasks to be completed, identifies who is responsible for completing them, and establishes decision points within the ALM. This approach eliminates many sources of common errors and helps reduce the inherent risk in any project. For example, workflows help ensure that you are not dependent upon one specific person, often a subject matter expert (SME), which is commonly known as keyman risk. When we automate workflows, we usually start by observing the overall activity within the group and then we interview each of the stakeholders. What is interesting is that sometimes experienced professionals cannot actually tell you what they have been doing in many complicated workflows because there are often many decision points and exceptions to the rules, which require judgment and considerable business knowledge. In doing this type of work, we are often helping these folks get some structure around what they have been doing via their own extensive tribal knowledge for years. Sometimes we discover mistakes that have been made, and obviously getting the process actually documented helps to eliminate keyman risk, while often allowing these colleagues to actually take a vacation without being interrupted for the first time in years.

Don’t expect to be able to identify all of the steps in a workflow the first time that you try. This effort often takes a few iterations and generally requires a strong DevOps approach, including input from multiple stakeholders. In addition, processes often change, perhaps even on a seasonal basis. You may find that teams have special rules for handling emergency fixes at the end of a financial quarter, when deadlines must be met and accuracy is an absolute must-have. The best way to handle this effort is to make use of process modeling software.

7.7 Process Modeling Automation

Modeling a process is almost impossible without the right tools. This is where visual modeling is essential, and the best tools allow you to create a model and then interactively update it as needed. Some tools create the visual model based upon a configuration file, whereas others let you build the workflow directly through a more intuitive graphical interface.
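
As an illustration of the configuration-style approach, the following Python sketch defines a hypothetical workflow as data and emits Graphviz DOT text that can be rendered into a diagram; the states and transitions are purely illustrative and would come from your own process discovery sessions.

    """Sketch: define a workflow as data, then emit Graphviz DOT text for a visual model.

    The states and transitions are hypothetical; real workflows would come from
    your own process discovery sessions.
    """

    WORKFLOW = {
        "Submitted":   ["In Review"],
        "In Review":   ["Approved", "Rejected"],
        "Approved":    ["In Progress"],
        "In Progress": ["Testing"],
        "Testing":     ["Done", "In Progress"],   # failed tests loop back
        "Rejected":    [],
        "Done":        [],
    }

    def to_dot(workflow):
        """Render the transition table as Graphviz DOT text."""
        lines = ["digraph workflow {", "  rankdir=LR;"]
        for state, next_states in workflow.items():
            for target in next_states:
                lines.append(f'  "{state}" -> "{target}";')
        lines.append("}")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(to_dot(WORKFLOW))

Piping the output through the Graphviz dot command (for example, dot -Tpng -o workflow.png) produces the visual model, which can then be reviewed and refined with stakeholders.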

Using automation to model your process helps communicate the steps to your workflow in a cognitively intuitive way. The agile ALM has many tools and approaches that you can use to manage the software and systems lifecycle.

7.8 Managing the Lifecycle with ALM

Managing the software and systems lifecycle can be challenging. The agile ALM provides the tools and processes necessary to manage the day-to-day activities within the lifecycle. Managing the lifecycle with the ALM begins with communicating the overall workflow, including a clear description of each of the required tasks, roles, and decision points to all stakeholders. There is no assumption that processes will not change within the ALM and, in reality, they often do change. But this approach puts structure around the workflow and helps us manage the required changes, including exceptions. Although we often focus mostly on application build, package, and deployment, the scope of the ALM is actually quite broad.

7.9 Broad Scope of ALM Tools

ALM tools should be used to manage the entire agile ALM, starting with requirements management, architecture design, and test case management, along with application build, package, and deployment. ALM tools have a broad scope and are essential for managing the software and systems development process. Each aspect of the ALM may involve different toolsets, or perhaps a complete ALM suite of tools from one specific vendor. Whether you are using a suite of tools from one vendor or different tools from an array of sources, it is essential to integrate toolsets into comprehensive toolchains. This is where creating seamless tools integration is a must-have.

7.10 Achieving Seamless Integration

Integrating tools is an essential, although often misunderstood, requirement. When we learn that a particular tool integrates with another tool, we generally ask what the integration actually does. Additionally, we need to understand which fields are linked and how that linkage is accomplished at a technical level. For example, many requirements tracking tools integrate with test case management solutions. The purpose of such integration is to ensure that each requirement is traced to a test case, verifying and validating that the requirement was met in the features of the software. In an agile ALM, we should admit when the requirements are not fully understood, and with each iteration of the software, we partner with our business and product experts to understand exactly what the software should do. Requirements-to-test-case integration can also provide valuable information to developers who need to fully understand how a particular feature should function. Seamless integration in this context would involve being able to see the original requirement together with the related test cases, which also describe expected behavior. We have seen many situations where the requirements description was completely correct, although less than clear and comprehensive. We often find that the test cases provide a valuable source of information describing the intended requirements, because well-written test cases describe user interactions and expected behavior in a clear and comprehensive way.
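
The following Python sketch shows the kind of traceability report such an integration makes possible; the record layout and IDs are hypothetical, and a real integration would pull these records from the requirements and test management tools’ own interfaces.

    """Sketch of a requirements-to-test-case traceability check.

    The record layout and IDs are illustrative; a real integration would pull
    these records from your requirements and test management tools' APIs.
    """
    from dataclasses import dataclass

    @dataclass
    class Requirement:
        req_id: str
        description: str

    @dataclass
    class TestCase:
        test_id: str
        covers: list          # requirement IDs this test verifies
        status: str = "not run"

    def coverage_report(requirements, test_cases):
        """Report whether each requirement is traced to at least one test case."""
        covered = {req_id for tc in test_cases for req_id in tc.covers}
        return {r.req_id: (r.req_id in covered) for r in requirements}

    requirements = [
        Requirement("REQ-101", "User can reset a forgotten password"),
        Requirement("REQ-102", "Password reset link expires after 24 hours"),
    ]
    test_cases = [
        TestCase("TC-9001", covers=["REQ-101"], status="passed"),
    ]

    if __name__ == "__main__":
        for req_id, is_covered in coverage_report(requirements, test_cases).items():
            print(f"{req_id}: {'covered' if is_covered else 'NO TEST CASE'}")

Running the report immediately shows the second requirement with no test case, which is exactly the kind of gap a seamless integration should surface.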

Defects that are found can also be linked to requirements and test cases, which can be very helpful to the analysts who are responsible for implementing changes to address defects found in the product, whether they be reported by your help desk or found within your own QA and testing process. Having seamless integration may also result in one consolidated defect-tracking system used throughout the ALM instead of the often-dysfunctional situation where there are multiple defect-tracking systems serving various systems—often with disparate descriptions, which can be confusing and less than helpful. Similarly, workflow automation tools can be integrated with other tools in the ALM to track pending tasks and completed work. This is especially important in the automation of the application build, package, and deployment, which we discuss in many chapters of this book. Integration is a particularly important aspect of requirements management.

7.11 Managing Requirements of the ALM

Requirements management has been largely maligned over the last few years by those who feel that these efforts simply result in large binders that sit on the shelf and are quickly outdated. The truth is that requirements are best managed in modern systems that facilitate the thought process of working through exactly how the system should be designed and implemented. Documenting requirements is actually very important. Although the agile manifesto does indeed note that we value “working software over comprehensive documentation,” a mature agile process, discussed in Chapter 4, should document requirements sufficiently. It has long been our recommendation that well-designed test cases be written to supplement existing requirements documentation. It is absolutely true that requirements will evolve over time, and the agile manifesto also notes that we value “responding to change over following a plan.” As each iteration of the software is created and our product owners and business subject matter experts (SMEs) begin to more deeply understand how the system should ideally behave, our understanding of the requirements is likely to evolve. Outside industry forces, including competition from other firms, may also affect requirements. But modern requirements automation tools can help us understand and communicate these requirements. We view test cases as an ideal way to keep track of these details in a pragmatic and useful way. We also need to consider the best approach to documenting and clarifying the development work that needs to be completed.

7.12 Creating Epics and Stories

It has become popular to document requirements in terms of epics and stories. Writing good epics and stories can be challenging. We recommend that you have your user stories reviewed by SMEs for completeness and clarity. Well-written stories can help analysts understand the functionality that they need to support while providing a comprehensive design.

7.13 Systems and Application Design

Systems and application design depends upon a thorough understanding of the system requirements from not only a user functionality perspective, but also, more importantly, from a technical perspective. As architects, we need to be able to create comprehensive design documents that communicate both the system and the applications design and implementation details. Many great tools help to document and communicate comprehensive designs that can also be linked to requirements and test cases, which help ensure that the designs are fit for purpose and fit for use. This means that the system should behave as intended and as needed for practical use. Good design is a fundamental aspect of creating quality systems and applications. Another aspect of this effort is to create code of sufficient quality. Automation to inspect and analyze code quality often depends upon code quality instrumentation.

7.14 Code Quality Instrumentation

Code quality is essential, and many tools can be helpful in instrumenting and analyzing applications to identify opportunities to improve code quality. Instrumenting code means that you provide a runtime environment, and possibly a separately compiled version of the code, so that the code quality tools can provide useful information; a sketch of a simple automated quality gate follows. Building variants of the code was discussed in Chapter 6. These capabilities are essential for automating the agile ALM. Testing the code is essential, but you should also remember that the software and systems lifecycle needs to be tested as well.
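
As a simple illustration, the following Python sketch wraps a few common quality checks into a single gate; it assumes that the open-source coverage.py, pytest, and pylint packages are installed, and the module name, test directory, and coverage threshold are placeholders.

    """Sketch: gathering code quality metrics as part of the build.

    Assumes the open-source coverage.py, pytest, and pylint packages are
    installed; the module name "myapp" and the threshold are placeholders.
    """
    import subprocess
    import sys

    CHECKS = [
        # (description, command)
        ("Unit tests with coverage", ["coverage", "run", "-m", "pytest", "tests/"]),
        ("Coverage report (fail under 80%)", ["coverage", "report", "--fail-under=80"]),
        ("Static analysis", ["pylint", "myapp"]),
    ]

    def main():
        for description, command in CHECKS:
            print(f"== {description} ==")
            result = subprocess.run(command)
            if result.returncode != 0:
                print(f"Quality gate failed: {description}")
                sys.exit(result.returncode)
        print("All code quality checks passed.")

    if __name__ == "__main__":
        main()

A script like this is typically wired into the continuous integration server so that a failed quality check fails the build.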

7.15 Testing the Lifecycle

Testing should focus not only on the application, but also on the software and systems lifecycle itself. We have seen many situations where the lifecycle itself did not work as expected or needed. For example, all too often, each step is not completed or verified as required, or the process itself is not repeatable. One very common reason for these problems is that the workflow automation tools are hard to use and people become accustomed to working around them, which, instead of streamlining the process, can actually impede the success of the workflow. We have seen many workflow automation tools that require the user to select from a specific set of choices. This works fine until none of the choices make any sense or the choices are so hard to understand that it is too easy to make a mistake. When you implement a workflow automation tool, make sure that you test the automation itself, including the use cases and especially the user interface; the sketch below illustrates testing the workflow rules themselves.
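
Here is a minimal sketch of what testing the workflow itself can look like, assuming a transition table that mirrors the hypothetical workflow sketched earlier; each test documents a rule the automation must enforce.

    """Sketch: testing the workflow automation itself, not just the application.

    The transition table mirrors the hypothetical workflow sketched earlier;
    the point is that allowed and disallowed transitions are asserted explicitly.
    """

    ALLOWED = {
        ("Submitted", "In Review"),
        ("In Review", "Approved"),
        ("In Review", "Rejected"),
        ("Approved", "In Progress"),
        ("In Progress", "Testing"),
        ("Testing", "Done"),
        ("Testing", "In Progress"),
    }

    def can_transition(current, target):
        return (current, target) in ALLOWED

    # Run with pytest: each test documents a rule the tool must enforce.
    def test_review_is_required_before_approval():
        assert not can_transition("Submitted", "Approved")

    def test_failed_testing_returns_to_in_progress():
        assert can_transition("Testing", "In Progress")

    def test_done_is_terminal():
        assert not can_transition("Done", "In Progress")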

Similarly, we come across build engineering tools that are not working correctly, including continuous integration (CI) servers. This may be because the team responsible for supporting the CI server does not know enough about the application that is being built. When this happens, they run the tool, but the results may not be very useful. We have seen CI servers rendered useless because the team handling the support of the tool did not really understand its usage. Testing the CI server itself can help ensure that your continuous integration process and tools are aligned and can help ensure the success of the agile ALM. Closely related is the need for test case management to support the ALM.

7.16 Test Case Management

Managing test cases is an important aspect of automating the agile ALM. It is much easier to write and manage excellent test cases when you have the right automation in place. We have participated in helping teams develop effective test cases and test scripts for many years. This effort is often an iterative process itself, and feedback loops such as incidents and help desk calls should be embraced as sources of new test cases, especially to ensure that problems do not recur.

Establishing the right approach to testing requires the right cognitive mind-set. We have seen teams struggle with creating good test cases. There are many in the agile world who believe that QA and testing teams should be integrated into the scrum. We see value in having a separate QA and testing function that does not report to the development team, so as to avoid any undue influence or pressure to approve a release that, in fact, is not ready for production. However, colocating and embedding testers with developers is a great idea that facilitates knowledge sharing and communication.

There are two competing cognitive constructs here, and both are important. The first is that testers need some independence and should not be unduly pressured to approve a release that is not ready for promotion. The second is that testers need good technical knowledge, which they can acquire by interacting with the folks who wrote the code.

We all know that testing is everyone’s job, but helping the team test effectively can be very challenging indeed. We find that reviewing test cases and test scripts as code can be very effective in improving the quality of testing. We run these sessions as we would any code review or inspection and take a DevOps approach, having the cross-functional team review these artifacts. Developers in these sessions will volunteer technical information, and the QA and testing team will help developers think of edge cases that they would otherwise not have considered. Developers should be part of the QA and testing process. We also benefit from taking a test-centric approach during the construction phase, a practice that has become known as test-driven development (TDD).

7.17 Test-Driven Development

TDD has emerged as an effective best practice wherein developers write automated unit tests before they actually write the code itself. Because the tests are written before the code exists, the initial test harness “fails,” which is expected. Then the actual code is written, and the test cases should pass if the logic is correct; a minimal sketch of this red-green rhythm follows. In practice, we like test-driven development, but we have seen many teams struggle to get beyond the most limited coverage using this approach. Once again, even though good tools can help, test-driven development seems to have some inherent limitations. Although TDD could be improved in the future, we also see environment management and control as an area that has much growth potential.
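
Here is a minimal sketch of that rhythm in Python, assuming a pytest-style test runner; the password_strength function and its rules are hypothetical.

    """Sketch of the TDD rhythm: the tests are written first and fail until the
    code below them is implemented. The password_strength function is hypothetical.
    """

    # Step 1: write the tests first (they fail while password_strength is unwritten).
    def test_short_passwords_are_rejected():
        assert password_strength("abc") == "weak"

    def test_long_passwords_are_accepted():
        assert password_strength("correct-horse-battery-7") == "strong"

    # Step 2: write just enough code to make the tests pass.
    def password_strength(password: str) -> str:
        if len(password) < 8:
            return "weak"
        return "strong"

The tests fail until password_strength is implemented, and from then on they act as a safety net for later refactoring.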

7.18 Environment Management

Runtime environments have many dependencies, which are often not well identified and understood. We believe that this is a key area that needs much more focus in the agile ALM. Automation is fundamental to understanding, monitoring, and controlling runtime dependencies. Most teams fail to get beyond monitoring memory, disk space, and CPU usage. But complex systems today have many essential runtime resources that need to be understood and managed. This is where the DevOps approach is once again extremely important. Your operations team likely knows how to establish effective IT controls, but it is the developers, who wrote the code, who really understand the application dependencies and what really needs to be monitored. When application problems occur, developers quickly begin checking technical dependencies in an effort to identify the source of these problems. You should try to capture these steps because they are often exactly the environment dependencies that need to be monitored; a sketch of such a dependency check follows.

We like to encourage good environment management from the beginning of the software and systems lifecycle. We see developers expertly evaluating environment dependencies in the development test environments, but rarely sharing that knowledge with the operations team once the application is promoted to user acceptance testing (UAT) and production. We discuss change management in Chapter 10. DevOps encourages “left-shift,” which means that operations should get involved early with managing the deployments to development test environments, long the sole domain of developers. When operations gets involved in the beginning, they begin to get the information that they need to automate the environment management process. Closely related is the need to baseline runtime dependencies, or, as some folks call it, creating gold copies.
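
A minimal sketch of such a dependency check in Python follows; the environment variables, hosts, and ports are placeholders for the dependencies your developers actually check when an application misbehaves.

    """Sketch of a runtime dependency check that goes beyond CPU, memory, and disk.

    The hosts, ports, and environment variables listed here are placeholders
    for the dependencies your own application actually requires.
    """
    import os
    import socket

    REQUIRED_ENV_VARS = ["DATABASE_URL", "MESSAGE_BROKER_URL"]
    REQUIRED_SERVICES = [
        ("db.internal.example.com", 5432),
        ("broker.internal.example.com", 5672),
    ]

    def check_env_vars():
        """Verify that required configuration is present in the environment."""
        return {name: name in os.environ for name in REQUIRED_ENV_VARS}

    def check_services(timeout=2.0):
        """Verify that required backing services are reachable."""
        results = {}
        for host, port in REQUIRED_SERVICES:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    results[(host, port)] = True
            except OSError:
                results[(host, port)] = False
        return results

    if __name__ == "__main__":
        for name, present in check_env_vars().items():
            print(f"env {name}: {'set' if present else 'MISSING'}")
        for (host, port), reachable in check_services().items():
            print(f"{host}:{port}: {'reachable' if reachable else 'UNREACHABLE'}")

Captured in a script like this, the same checks can be run by operations in UAT and production just as easily as by developers in their test environments.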

7.18.1 Gold Copies

The term “gold copy” is reminiscent of a time when physical recordings were created by record companies. In IT, a gold copy refers to the final release candidate that has passed tests and is approved for release to production. We use this term to refer both to the code itself, which has gone through an official build process, and to the baseline runtime environment that is essential to supporting the application. Gold copies should be maintained in a definitive media library (DML) and verified through an automated discovery process, sketched below. We discuss the use of cryptography and embedding immutable version IDs into configuration items in Chapter 6, which is a requirement for any valid automated discovery process. The information that is discovered by the automation should be maintained in the configuration management database.
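
The following Python sketch shows one way such a verification could work, assuming the DML stores a JSON manifest of SHA-256 hashes recorded at build time; the manifest format and paths are illustrative.

    """Sketch: verifying deployed files against a gold-copy baseline manifest.

    Assumes the definitive media library stores a manifest of SHA-256 hashes;
    the manifest format and paths are illustrative.
    """
    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path):
        """Compute the SHA-256 hash of a file in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def audit(manifest_path, deploy_root):
        """Compare each file's hash to the baseline recorded in the DML manifest."""
        manifest = json.loads(Path(manifest_path).read_text())
        mismatches = []
        for relative_path, expected_hash in manifest["files"].items():
            actual = sha256_of(Path(deploy_root) / relative_path)
            if actual != expected_hash:
                mismatches.append(relative_path)
        return mismatches

    if __name__ == "__main__":
        changed = audit("gold_copy_manifest.json", "/opt/myapp")
        print("Baseline verified" if not changed else f"Drift detected: {changed}")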

7.19 Supporting the CMDB

The configuration management database (CMDB) is a great tool that many companies have spent millions to implement, albeit often with very little return on investment. We see organizations purchase expensive database-driven CMDBs but then try to keep the data up-to-date through manual processes. If you don’t automate the discovery process, then your CMDB will be useless. Truthfully, this is not that hard to do. You simply need a good build, package, and deployment process that embeds version IDs into configuration items and manifests in the release packages. The rest of the effort is simply to “discover” the version IDs in the CIs that are running in production, UAT, or another environment, in a process known as a physical configuration audit; a minimal sketch follows. We discuss these procedures in Chapter 6, “Build Engineering in the ALM.” The CMDB is a valuable tool when it is designed well and kept up-to-date through automation. This is an excellent example of where development and operations must work together to ensure success. In fact, effective DevOps tools and processes are essential for driving the entire DevOps transformation.
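
Here is a minimal sketch of such a discovery pass in Python, assuming each deployed component exposes an embedded version ID (here, a version.txt file written by the build); the environments and paths are placeholders, and the CMDB update is shown as a printed record rather than a call to any particular CMDB product’s interface.

    """Sketch of a physical configuration audit feeding the CMDB.

    Assumes each deployed component exposes an embedded version ID written by
    the build; the environments and paths are placeholders.
    """
    from pathlib import Path

    ENVIRONMENTS = {
        "production": "/opt/myapp/releases/current",
        "uat": "/opt/myapp-uat/releases/current",
    }

    def discover_version(install_path):
        """Read the immutable version ID embedded by the build process."""
        version_file = Path(install_path) / "version.txt"
        return version_file.read_text().strip() if version_file.exists() else "UNKNOWN"

    def audit_environments(environments):
        """Discover the deployed version in each environment."""
        return {env: discover_version(path) for env, path in environments.items()}

    if __name__ == "__main__":
        for environment, version in audit_environments(ENVIRONMENTS).items():
            # In practice this record would be pushed to the CMDB through its API.
            print(f"CI: myapp  environment: {environment}  discovered version: {version}")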

7.20 Driving DevOps

DevOps depends heavily upon good tooling. Although many folks in the process improvement arena suggest that process is more important than tools, for those promoting DevOps, tools are first-class citizens. Automating the agile ALM is an essential aspect of driving successful DevOps. Without automation, DevOps would fail to achieve its goals of improving communication between development and operations. Automation includes managing the workflow and, more importantly, the exchange of information between development and operations. Knowledge management and knowledge sharing are fundamental to driving DevOps adoption. We see far too many cases where development operates in a vacuum, excluding operations from decisions and the overall development process. This approach leads to dysfunctional behaviors that often adversely impact the company in many important ways. Automating the agile ALM directly improves the support of production operations and, of course, the operations team itself.

7.21 Supporting Operations

Workflow automation tools are fundamental to any successful operations function. Capturing and sharing knowledge is equally important and often overlooked during the development process. This oversight is dysfunctional for many reasons. First, the operations team is stuck playing catch-up when they are brought into the learning process late in the game. This reinforces development’s contention that operations lacks expertise and creates a self-fulfilling prophecy. In our work, we advocate hiring strong senior engineers into operations to take responsibility for running production applications and, most importantly, including them at the beginning of the software and systems development lifecycle.

Some folks in the DevOps community advocate that developers take responsibility for running their own code. We view this approach as a remarkably poor idea for several reasons. First, developers often do not have the mind-set to focus on creating repeatable processes and reliability; that said, we do see many senior software engineers who possess the necessary expertise, experience, and demeanor to handle production operations. More importantly, there is something about the transfer of knowledge that occurs when development does an effective hand-off to operations that results in improved quality. During this process, developers suddenly recall what they forgot to document, or even code that was never checked into version control (and is sitting locally on their own private laptops). Having developers support their own code also often results in keyman risk, where there is a lack of institutionalized knowledge. The IT Operations function serves an important and distinctive purpose; it should be staffed with strong technical resources and involved from the very beginning of the software and systems delivery process. We also need to support our frontline help desk engineers.

7.22 Help Desk

The help desk is usually on the front lines of ensuring that customers are kept happy, and it relies heavily upon automation of the agile ALM. Help desk staff are often not given enough training and support to ensure they can effectively address customer concerns. Keeping the help desk advised of system outages and issues, and giving them a robust workflow tool to capture customer concerns and quickly feed information back to operations and other stakeholders, is essential. We have worked with help desk managers to ensure that they get a constant flow of updated information and also that reported incidents are reviewed by QA, testing, and development managers as needed. Closely related is the service desk.

7.23 Service Desk

Many companies call their help desk a service desk, but they are really two distinct entities. The service desk manages, well, services. ITIL v3 defines the service desk as a primary IT service within the discipline of IT service management (ITSM), intended to provide a single point of contact (SPOC) to meet the communication needs of both users and IT employees. Typically, companies use the service desk function to handle the flow of frequent routine requests such as password resets or service requests upon which everyone depends. It is common to have specialized service desks and escalation points to help address and resolve issues. The service desk relies heavily upon excellent workflow automation and knowledge management. Closely related is the essential function of incident management.

7.24 Incident Management

Any large-scale production environment is going to be challenged with incidents; some may be routine, others extremely serious. Incident management done well can “catch the football before it hits the ground” or, if not handled well, can actually make situations far worse than they need to be. Incident management also relies heavily upon workflow automation and knowledge management tools. When the incident management team recognizes a pattern, it can help save the day by getting the appropriate resources involved and addressing the issue to minimize, and often prevent, customer impact. Often incidents require root-cause analysis, and this is the point at which problem escalation becomes essential.

7.25 Problem Escalation

Routine incidents can often be handled by well-established procedures, but sometimes the root cause of the issue, particularly for recurring problems, must be analyzed, and often this means that a wider escalation of resources is necessary. Problem escalation is most often associated with system outages and can be extremely costly if not handled efficiently. The agile ALM relies heavily upon automation to manage problem escalation for proper communication, workflow automation, and knowledge management. Incident and problem management systems are essential for ensuring an effective response when issues occur. Fortunately, serious outages don’t usually happen every day, but most large-scale projects do require daily coordination, and this is where project management is essential.

7.26 Project Management

The agile manifesto aptly notes that we value responding to change over following a plan, which is absolutely true. But large-scale projects will not be successful without good planning. The real difference is that agility admits when we do not fully understand all of our dependencies and therefore have to postpone some decisions until more information is available. Our Lean colleagues have coined the phrase “last responsible moment” to indicate that sometimes decisions should be postponed until enough information is available to make the best decision. We concur fully that responding effectively to change is far more important than following a plan, but the fact is that no one is going to give you a large budget without being assured that you have a comprehensive plan and can communicate both dependencies and goals in a clear and reliable way. Facilitating project management is the key concern of the organization that has become known as the project management office (PMO).

7.27 Planning the PMO

The project management office requires workflow automation and project management tools to successfully automate the project planning process. The right automation can alleviate the harsh workload that is often associated with project management and, more importantly, smoothly facilitate handling changes to the process when necessary. We see this as an area where many teams are failing miserably in implementing agility in large enterprise organizations. The PMO should facilitate the work between various stakeholders, accelerating DevOps principles and practices. We find ourselves constantly partnering with project managers to ensure that the right stakeholders are involved and that communication is Lean and effective. There is no place where this is more important than in planning for implementation.

7.28 Planning for Implementation

Implementation of a new release, or of a new system altogether, is the finish line that we are all working to cross together successfully. Like conducting an orchestra, implementation planning requires the coordination of an amazing number of actions by skilled resources. Getting everyone in tune and working together along the same cadence is no easy task. Like many aspects of the agile ALM, none of this coordination would be possible without the right tools.

7.29 Evaluating and Selecting the Right Tools

Tools selection is a sore point with us because we have observed too many organizations suffer from impulse buying after examining one shiny new tool, often shepherded along by a little slick salesmanship. Vendors are indeed often expert and do share best practices that have, in turn, been influenced by their customer base and years of experience. That said, we strongly encourage our colleagues to establish evaluation criteria up-front and then review at least two or three tools in the same functional space. When looking at products and vendors, you will absolutely find yourself updating your evaluation criteria, which is fine. But making decisions to purchase big-ticket items should be based upon due diligence—including a proof-of-concept (POC) and preferably a structured product bake-off. Another issue that we often see is a siloed approach to tools selection. Many companies purchase tools for individual teams without conducting an enterprise-wide evaluation, which would likely result in more thorough decisions along with better enterprise-wide pricing. Regardless of the direction you choose, it is essential to document how you will use your tools and train and support your team.

7.30 Defining the Use Case

We see many pragmatic situations where the best tools cannot realistically be implemented and used. Sometimes, this is because of a lack of budget. Often, it is because more powerful tools require much more training and ongoing support, which management refuses to pay for. When stuck with less-than-perfect tools, or even when implementing the best tools, it is essential to define how the tools will be used. This helps establish best practices and also the right processes, which may themselves evolve over time. We also find that training is the most important factor in successfully implementing tools.

7.31 Training Is Essential

We recognize the value of excellent vendor training, especially for those who will be responsible for supporting and maintaining the tools. But we also view training as a key corporate capability. You should develop your own training programs, tailored to your preferred way of using each specific tool. When training is done in-house by qualified resources, then the company benefits from not only spreading knowledge and best practices, but a preferred usage model as well. Successful tools implementation depends upon effective training. It also depends upon a strong relationship with vendors.

7.32 Vendor Relationships

Vendor relationships are essential. The right vendors understand that customer success and their own success are tightly coupled. We have had many long-term relationships with large (and small) vendors who value input from their customers and are committed to spreading industry best practices. This is a synergistic relationship that benefits both the customer and the vendor.

7.33 Keeping Tools Current

Tools must be kept updated to avoid common security issues and allow users to enjoy the latest features and fixes to key issues, including product defects. But large-scale tools used in the agile ALM automation may require a full systems lifecycle to manage the testing and upgrade process itself. We have seen situations where vendor-supplied updates cause major outages, so customers should always manage the process of keeping tools current with care.

7.34 Conclusion

There are many things to consider when automating the agile ALM. DevOps, along with most aspects of the software and systems delivery process, cannot succeed without excellent automation. Processes are very important in the agile ALM, but tools and automation are also first-class citizens that must be managed effectively. Get this right and you will have many competitive advantages and be a long way down the path of successfully implementing your agile ALM!
