© Joachim Rossberg 2019
Joachim Rossberg, Agile Project Management with Azure DevOps, https://doi.org/10.1007/978-1-4842-4483-8_1

1. Introduction to Application Life Cycle Management

Joachim Rossberg, Kungsbacka, Sweden

What do you think about when you hear the term application life cycle management (ALM) ? During a seminar tour in 2005 in Sweden, presenting on Microsoft’s Visual Studio Team System, we asked people what ALM was and whether they cared about it. To our surprise, many people equated ALM with Operations and Maintenance. This is still often the case when we visit companies, although today more people are aware of the term.

Was that your answer as well? Does ALM include more than just Operations? Yes, it does. ALM is the thread that ties together the development life cycle. It involves all the steps necessary to coordinate development life cycle activities. Operations are just one part of the ALM process. Other elements range from requirements gathering to more technical things such as the build-and-deploy process.

These days we do not talk about ALM as a concept as much as we used to; we talk more about DevOps. But let’s start by discussing ALM in this chapter to lay a foundation for DevOps and for Azure DevOps, as Microsoft now calls its tool suite.

Microsoft renamed Visual Studio Team Services (VSTS) to Azure DevOps at the end of 2018. As of this writing, Microsoft’s Team Foundation Server (TFS) is in version 2018. In the near future, the TFS name will be changed to Azure DevOps Server.

Aspects of the ALM Process

All software development includes various steps performed by people playing specific roles. There are many different roles, or disciplines, in the ALM process, and some of them are defined in this section. (The process could include more roles, but we focus on the primary ones.)

Look at Figure 1-1, which illustrates ALM and some of its aspects. You can see the flow from the birth of an application, when the business need first arises, to when the business need no longer exists and the application dies. Once the thought of a new application (or system) is born, many organizations do some portfolio rationalization. This means you discuss whether an existing system can handle the need or whether a new system has to be developed. If a new system must be built, you enter the software development life cycle (SDLC) and develop the system, test it, and deploy it into operation. This is the point at which you do a handover so that Operations personnel can maintain and refine the system. Once in production, the system (hopefully) delivers the intended business value until retirement. While in operation, the system usually is updated or undergoes bug fixes; at such times, change requests (CRs) are written. For each CR, you go through the same process again.
Figure 1-1. The application life cycle management process

It’s essential to understand that all business software development is a team effort. The company personnel who play specific roles collaborate on projects to deliver business value to the organization. If you don’t have this collaboration, the value of the system will most likely be considerably less than it could be. One step up from the project level, at the program level, it’s also important to have collaboration between all roles involved in the ALM process so that you perform this process in the most optimal way.

The roles in the ALM process include, but aren’t limited to, the following:
  • Stakeholders: Stakeholders are usually the people who either pay for the project or have decision-making rights about what to build. We also like to include end users in this group, so that it’s not only management that has a stake in a project.

  • Business manager: Somebody has to decide that a development activity is going to start. After initial analysis of the business needs, a business manager decides to initiate a project to develop an application or system that delivers the expected business value. A business manager, for instance, must be involved in the approval process for a new suggested project, including portfolio rationalization, before a decision to go ahead is made. Information technology (IT) managers are also part of this process because IT staff will probably be involved in the project’s development and deployment into the infrastructure.

  • Project manager, product owner, or Scrum master: Suitable individuals are selected to fill these roles, and they are prepared to work on the project after the decision to go ahead is made. Ideally, these people continue leading the project all the way through, so that you have continuity in project management.

  • Project management office (PMO) decision makers: These individuals are also involved in planning because a new project may change or expand the company’s portfolio.

  • Business analyst: After requirements collection starts, the business analyst has much to do. Usually, initial requirements are gathered when the business need arises, but the real work often begins after portfolio rationalization. A business analyst is responsible for analyzing the business needs and requirements of the stakeholders to help identify business problems and propose solutions. Within the system’s development life cycle, the business analyst typically performs a collaborative function between the business side of an enterprise and the providers of services to the enterprise.

  • Architect: The architect draws an initial picture of the solution. In brief the architect draws the blueprint of the system, and the system designers or engineers use this blueprint. The blueprint includes the level of freedom necessary in the system: scalability, hardware replacement, new user interfaces, and so on. The architect must consider all these issues.

  • User experience (UX) design team: UX design should be a core deliverable, not something you leave to the developers to handle. Unfortunately, this design is often overlooked; it should be given more consideration. It’s important to have close collaboration between the UX team (which could be just one person) and the development team. The best solution is to have a UX expert on the development team throughout the project, but sometimes that isn’t possible. The UX design is important in making sure users can perceive the value of the system. You can write the best business logic in the world, but if the UX is designed poorly, users probably won’t think the system is any good.

  • Database administrators (DBAs): Almost every business system or application uses a database in some way. DBAs can make your databases run like lightning, with good uptime, so it’s essential to use their expertise in any project involving a database. Be nice to them; they can give you lots of tips about how to make a smarter system. Alas for DBAs, developers increasingly handle this work themselves. This means developers are expected to have vertical knowledge and not just focus on coding.

  • Developers: “Developers, developers, developers!” as Microsoft Chief Executive Officer (CEO) Steve Ballmer shouted in a famous video. And who can blame him? These are the people who work their magic to realize the system by using the architecture blueprint drawn from the requirements. Moreover, developers modify or extend the code when CRs come in.

  • Testers: I’d rather not see testing as a separate activity. Don’t get me wrong: It’s a role, but testing is something you should consider from the first time you write down a requirement and should continue doing during the whole process. Testers and test managers help to secure quality, but modern development practices include testing by developers as well. For instance, in test-driven development (TDD), developers write tests that can be automated and run at build time or as part of checking in to version control (a minimal test sketch follows this list).

  • Operations and maintenance staff: When an application or system is finished, it’s handed over to operations. The Operations staff takes care of it until it retires, often with the help of the original developers, who come in to do bug fixes and new upgrades. Don’t forget to involve these people early in the process, at the point when the initial architecture is considered, and keep them involved with the project until everything is done. They can provide great input about what can and can’t be done within the company infrastructure. So, Operations is just one part—although an important one—of ALM. In Chapter 3, this book talks about DevOps, which is a practice that ties developers and Operations personnel more closely.
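
To make the TDD point above concrete, here is a minimal, hypothetical example of the kind of developer-written test that runs automatically at build time. The ShoppingCart class and its behavior are invented for illustration, and the sketch uses Python’s built-in unittest module; any unit test framework works the same way.

    import unittest

    class ShoppingCart:
        """Hypothetical production class; in TDD, the test below is written first."""
        def __init__(self):
            self._items = []

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)

    class ShoppingCartTests(unittest.TestCase):
        def test_total_sums_item_prices(self):
            cart = ShoppingCart()
            cart.add("book", 10.0)
            cart.add("pen", 2.5)
            self.assertEqual(cart.total(), 12.5)

    if __name__ == "__main__":
        unittest.main()   # a build server can run this suite on every check-in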

All project efforts are done collaboratively. No role can act separately from the others if you’re to succeed with any project. It’s essential for everybody involved to have a collaborative mind-set and to have the business value as their primary focus at every phase of the project.

If you’re part of an Agile development process, such as a Scrum project, you might have only three roles: product owner, Scrum master, and team members. This doesn’t mean the roles just described don’t apply, though! They’re all essential in most projects; it’s just that, in an Agile project, you may not be labeled a developer or an architect. Rather, you’re a team member, and you and your fellow team members share responsibility for the work you’re doing. We go deeper into the Agile world in Chapter 4.

Four Ways of Looking at ALM

ALM is the glue that ties together the roles just discussed and the activities they perform. Let’s consider four ways of looking at ALM (Figure 1-2). We’ve chosen these four because we’ve seen this separation in many of the organizations with which we’ve worked or individuals to whom we’ve spoken:
  1. SDLC view: The SDLC view is perhaps the most common way of looking at ALM, because development has “owned” management of the application life cycle for a long time. This could be an effect of the gap between the business side and the IT side in most organizations, and IT has taken the lead.

  2. Service management or operations view: Operations has also been (in our experience) unfortunately separated from IT development. This has resulted in Operations having its own view of ALM and the problems that can occur in this area.

  3. Application portfolio management (APM) view: Again, perhaps because of the gap between business and IT, we’ve seen many organizations with a portfolio ALM strategy in which IT development is only one small part. From a business viewpoint, the focus has been on how to handle the portfolio, not on the entire ALM process.

  4. Unified view: Fortunately, some organizations focus on the entire ALM process by including all three of the preceding views. This is the only way to take control of, and optimize, ALM. For a chief information officer (CIO), it’s essential to have this view all the time; otherwise, things can get out of hand easily.

Figure 1-2. The four ways of looking at ALM

Let’s look at these four views in more detail, starting with the SDLC view.

The SDLC View

Let’s consider ALM from an SDLC perspective first. In Figure 1-3, you can see the different phases of a typical development project and the roles most frequently involved. Keep in mind that this is a simplified view for the sake of this discussion. We’ve also tried to fit in the different roles from the ALM process presented earlier.
Figure 1-3. A simplified view of a typical development project

First, somebody comes up with an idea based on an analysis of business needs: “Hey, wouldn’t it be great if we had a system that could help us do [whatever the idea is]?” It can also be the other way around: The idea comes first, and the business value is evaluated based on the idea.

An analysis or feasibility study is performed, costs are estimated, and (hopefully) a decision is made by IT and business management to start an IT project. A project manager (PM) is selected to be responsible for the project and begins gathering requirements with the help of business analysts, PMO decision makers, and users or others affected. The PM also starts planning the project in as much detail as possible at this moment.

When that is done, the architect begins looking at how to realize the new system, and the initial design is chosen. The initial design is evaluated and updated based on what happens during the project and how requirements change throughout the project. Development begins, including work performed by developers, user interface (UI) designers, and DBAs (and anyone else not mentioned here but important for the project).

Testing is, at least for us, something done all along the way—from requirements specification to delivered code—so it doesn’t get a separate box in Figure 1-3. We include acceptance testing by end users or stakeholders in the Development box. After the system has gone through acceptance testing, it’s delivered to Operations for use in the organization. Of course, the process doesn’t end there. This cycle is generally repeated over and over as new versions are rolled out and bug fixes are implemented.

What ALM does in this development process is support the coordination of all development life cycle activities by doing the following:
  • Makes sure you have processes that span these activities.

  • Manages the relationships between development project artifacts used or produced by these activities (in other words, provides traceability). These artifacts include UI mockups done during requirements gathering, source code, executable code, build scripts, test plans, and so on.

  • Reports on progress of the development effort as a whole so you have transparency for everyone regarding project advancement.

As you can see, ALM doesn’t support a specific activity. Its purpose is to keep all activities in sync. It does this so you can focus on delivering systems that meet the needs and requirements of the business. By having an ALM process that helps you synchronize your development activities, you can determine more easily whether an activity is underperforming and thus take corrective actions.

The Service Management or Operations View

From a Service Management or Operations view, you can look at ALM as a process that focuses on the activities that are involved with the development, operation, support, and optimization of an application so that it meets the service level that has been defined for it.

When you see ALM from this perspective, it focuses on the life of an application or system in a production environment. If, in the SDLC view, the development life cycle starts with the decision to go ahead with a project, here it starts with deployment into the production environment. Once deployed, the application is controlled by the Operations crew. Bug fixes and CRs are handled by them, and they also pat it on its back to make it feel good and run smoothly.

This is a healthy way of looking at ALM in our opinion: Development and Operations are two pieces of ALM, cooperating to manage the entire ALM process. You should consider both pieces from the beginning when planning a development project; you can’t have one without the other.

The APM View

In the APM view of ALM, you see the application as a product managed as part of a portfolio of products. APM is a subset of project portfolio management (PPM).1 Figure 1-4 illustrates this process.
Figure 1-4. The APM view of ALM

This view comes from the Project Management Institute (PMI). Managing resources and the projects on which they work is very important for any organization. In Figure 1-4, you can see that the product life cycle starts with a business plan—the product is an application or system that is one part of the business plan. An idea for an application is turned into a project and is carried out through the project phases until it’s turned over to Operations as a finished product.

When business requirements change or a new release (an upgrade, in Figure 1-4) is required for some other reason, the project life cycle starts again, and a new release is handed over to Operations. After a while (maybe years), the system or application is discarded (this is called divestment, which is the opposite of investment). This view doesn’t speak specifically about the operations part or the development part of the process but should instead be seen in the light of APM.

The Unified View

Finally, there is a unified view of ALM. In this case, an effort is made to align the previous views with the business. Here you do as the CIO would do: You focus on business needs, not on separate views. You do this to improve the capacity and agility of a project from beginning to end. Figure 1-5 shows an overview of the unified ALM view of a business.
Figure 1-5. The unified view takes into consideration all three previously mentioned views

You probably recognize this figure from Figure 1-1. We want to stress that with the unified view, you need to consider all aspects—from the birth to the death of an application or a system—hence the curved arrow that indicates continuous examination of an application or system and how it benefits the business.

Three Pillars of Traditional ALM

Let’s now look at some important pillars of ALM that are independent of the view you might have (Figure 1-6). These pillars were first introduced by Forrester Research.2
Figure 1-6. The three pillars of ALM

In the following sections, we examine these pillars in greater detail, starting with traceability.

Traceability

Some customers we’ve seen have stopped doing upgrades on systems running in production because their companies had poor or no traceability in their systems. For these customers, it was far too expensive to do upgrades because of the unexpected effects even a small change could have. The companies had no way of knowing which original requirements were implemented where in the applications. The effect was that a small change in one part of the code might affect another part, which would come as a surprise because poor traceability meant they had no way of seeing the code connection in advance. One customer claimed (as we’ve heard in discussions with many other customers) that traceability can be a major cost driver in any enterprise if not done correctly.

There must be a way to trace requirements all the way to delivered code—through architect models, design models, build scripts, unit tests, test cases, and so on—not only to make it easier to go back into the system when implementing bug fixes, but also to demonstrate that the system has delivered the things the business wants.

You also need traceability to achieve internal as well as external compliance with rules and regulations. If you develop applications for the medical industry, for example, you must comply with Food and Drug Administration (FDA) regulations. You also need traceability when CRs come in so you know where you updated the system and in which version you performed the update.
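
As a simple illustration of what end-to-end traceability can look like, the sketch below links a requirement to the test cases and commits that realize it and answers the change-request question “which requirements does this commit touch?” The identifiers and structure are hypothetical; real ALM tools keep these links in their own repositories, but the idea is the same.

    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        req_id: str                          # e.g., "REQ-42" (hypothetical ID scheme)
        title: str
        test_case_ids: list = field(default_factory=list)
        commit_ids: list = field(default_factory=list)
        released_in: str = "unreleased"      # version in which the requirement shipped

    def impact_report(requirements, commit_id):
        """List the requirements that a given commit implements or touches."""
        return [r.req_id for r in requirements if commit_id in r.commit_ids]

    reqs = [
        Requirement("REQ-42", "Customer can reset password",
                    test_case_ids=["TC-101", "TC-102"],
                    commit_ids=["a1b2c3"], released_in="2.1"),
    ]
    print(impact_report(reqs, "a1b2c3"))     # -> ['REQ-42']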

Automation of High-Level Processes

The next pillar of ALM is automation of high-level processes. All organizations have processes. For example, approval processes control handoffs between the analysis and design or build steps, or between deployment and testing. Much of this is done manually in many projects, and ALM stresses the importance of automating these tasks for a more effective and less time-consuming process. Having an automated process also decreases the error rate compared to handling the process manually.
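
A hand-off such as “the build must be approved before testing starts” can be pictured as a small state machine. The sketch below is a deliberately simplified, hypothetical workflow, not any vendor’s engine; the point is that the allowed transitions and required approvals are encoded once and then enforced automatically instead of by e-mail and memory.

    class Workflow:
        """Minimal approval workflow; states and rules are invented for illustration."""
        TRANSITIONS = {"design": "build", "build": "testing", "testing": "deployment"}
        NEEDS_APPROVAL = {"testing", "deployment"}   # hand-offs that require sign-off

        def __init__(self):
            self.state = "design"

        def advance(self, approved_by=None):
            next_state = self.TRANSITIONS.get(self.state)
            if next_state is None:
                raise ValueError(f"No hand-off defined from {self.state}")
            if next_state in self.NEEDS_APPROVAL and approved_by is None:
                raise ValueError(f"Hand-off to {next_state} requires an approver")
            self.state = next_state

    wf = Workflow()
    wf.advance()                       # design -> build, no approval needed
    wf.advance(approved_by="QA lead")  # build -> testing, sign-off recorded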

Visibility into the Progress of Development Efforts

The third and last pillar of ALM is providing visibility into the progress of development efforts. Many managers and stakeholders have limited visibility into the progress of development projects. The visibility they have often comes from steering group meetings, during which the PM reviews the current situation. Some would argue that this limitation is good; but, if you want an effective process, you must ensure visibility.

Other interest groups, such as project members, also have limited visibility of the entire project despite being part of the project. This is often a result of the fact that reporting is difficult and can involve a lot of manual work. Daily status reports take too much time and effort to produce, especially when you have information in many repositories.
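
When status lives in several repositories, even a simple daily report means pulling numbers from each tool and merging them by hand. The sketch below shows the kind of aggregation an ALM platform automates; the data sources and figures are made up for illustration.

    # Hypothetical status snapshots pulled from separate tools/repositories.
    work_tracking = {"done": 34, "in_progress": 12, "not_started": 18}
    test_results = {"passed": 210, "failed": 7}
    build_status = {"last_build": "succeeded"}

    def progress_report():
        total = sum(work_tracking.values())
        pct_done = 100 * work_tracking["done"] / total
        return (f"Work items: {pct_done:.0f}% done ({work_tracking['done']}/{total}), "
                f"tests failing: {test_results['failed']}, "
                f"last build: {build_status['last_build']}")

    print(progress_report())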

A Brief History of ALM Tools and Concepts

You can resolve the three pillars of ALM manually if you want to, without using tools or automation. (ALM isn’t a new process description, even though Microsoft, IBM, Hewlett-Packard (HP), Atlassian, and the other big software houses are pushing ALM to drive sales of their respective ALM solutions.) You can, for instance, continue to use Excel spreadsheets or, like one of our most dedicated Agile colleagues, use sticky notes and a pad of paper to track requirements through use cases/scenarios, test cases, code, build, and so on, to delivered code. It works, but this process takes a lot of time and requires much manual effort. With constant pressure to keep costs down, you need to make the tracking of requirements more effective.

Of course, project members can simplify the process by keeping reporting to the bare minimum. With a good tool or set of tools, you can cut time (and thus costs) and effort, and still get the required traceability you want in your projects. The same goes for reporting and other activities. Tools can, in our opinion, help you be more effective and also help you automate much of the ALM process into the tools.

Having the process built directly into your tools helps prevent the people involved from missing important steps by simplifying things. For instance, the Agile friend we mentioned could definitely gain much from this, and he is looking into Microsoft’s TFS to determine how that set of tools can help him and his teams be more productive. Process automation and the use of tools to support and simplify daily jobs are great, because they can keep you from making unnecessary mistakes.

Serena Software Inc. is one supplier of ALM tools, and the company has interesting insight into ALM and related concepts. According to Serena Software, there are eight ALM concepts3:

  1. Modeling: Software modeling

  2. Issue management: Keeping track of incoming issues during both development and operations activities

  3. Design: Designing the system or application

  4. Construction: Developing the system or application

  5. Production monitoring: The work of the operations staff

  6. Build: Building the executable code

  7. Test: Testing the software

  8. Release management: Planning application releases

To synchronize these concepts, according to Serena Software, you need tools that span them and that help you automate and simplify the following activities. If you look closely, you can see that these activities compare to ALM 2.0+, which we examine shortly:
  • Reporting

  • Traceability

  • Policies

  • Procedures

  • Processes

  • Collaboration

Imagine the Herculean task of keeping all these things in order manually. It’s impossible, if you want to get things right and keep an eye on the project’s status. Projects today seem to be going better because the number of failed projects is decreasing. Much of this progress is, according to Michael Azoff at the Butler Group,4 the result of “major changes in software development: open source software projects; the Agile development movement; and advances in tooling, notably Application Lifecycle Management (ALM) tools.” Some of these results have also been confirmed by later research, such as that by Scott W. Ambler at Ambysoft.5 Now you understand why finding tools and development processes to help you with ALM is important.

There is increasing awareness of the ALM process among enterprises, and we see this among our customers. ALM is much more important now than it was only five years ago.

ALM 1.0

Forrester Research has introduced some very useful concepts for ALM,6 including different versions of ALM and ALM tools. This section looks at how Forrester defined ALM 1.0, then continues to the latest version: ALM 2.0+.

As software has become more and more complex, role specialization has increased in IT organizations. This has led to functional silos in different areas (roles), such as project management, business analysis, architecture, development, database administration, testing, and so on. As you may recall from the beginning of this chapter, you can see this in the ALM circle. Having these silos in a company isn’t a problem, but having them without any positive interaction between them is an issue.

There is always a problem when you build impenetrable walls around you. ALM vendors have driven this wall construction, because most of their tools have, historically, been developed for particular silos. For example, if you look at build-management tools, they have supported the build silo (naturally), but have little or no interaction with test and validation tools (which is strange because the first thing that usually happens in a test cycle is the build). This occurs despite the fact that interaction among roles can generate obvious synergies with great potential. You need to synchronize the ALM process to make the role-centric processes part of the overall process. This might sound obvious, but it hasn’t happened until recently.

Instead of having complete integration among the roles or disciplines mentioned at the start of this chapter, and the tools they use, there has been point-to-point integration. For example, a development tool is integrated slightly with a testing tool (or, probably, the other way around). Each tool uses its own data repository, so traceability and reporting are hard to handle in such an environment (Figure 1-7).
Figure 1-7. ALM 1.0

This point-to-point integration makes the ALM process fragile and expensive. However, this isn’t just a characteristic of ALM 1.0; it’s true for all integrations. Imagine that one tool is updated or replaced. The integration may break, and then new solutions have to be found to get it working again. This scenario can be a reality if, for example, old functions in the updated or replaced tool are obsolete and the new tool doesn’t support backward compatibility. What would happen if Microsoft Word (to take an easy example) suddenly stopped supporting older Word files? There would be more or less a riot among users until the situation was fixed. This can be hard to solve even with integration between two tools. What if you have a more complex situation, including several tools? We’ve seen projects that use six or seven tools during development, which creates a fragile solution when new versions are released.

Tools have also been centered on one discipline. In real life, a project member working as a developer, for instance, often also acts as an architect or a tester. Because the people in each of these disciplines have their own tool (or set of tools), the project member must use several tools and switch among them. It could also be that the task system is separated from the rest of the tools, so to start working on a task, a developer must first retrieve the task from the task system—perhaps they must print it out or copy and paste it—then open the requirements system to check the requirement, then look at the architecture in that system, and finally open the development tool to begin working. Hopefully, the testing tools are integrated into the development tool; otherwise, yet another tool must be used. All this switching costs valuable time that could be better put into solving the task.

Having multiple tools for each project member is obviously costly as well, because everyone needs licenses for the tools they use. Even with open-source tools that may be free of charge, you have maintenance costs, adaptions of the tools, developer costs, and so on. Maintenance can be very expensive, so you shouldn’t forget this even when the tools are free. Such a scenario can be very costly and very complex. It’s probably also fragile.

As an example, take two coworkers at a large medical company in Gothenburg. They use a mix of tools in their everyday work. We asked them to estimate how much time they needed to switch among tools and transfer information from one tool to another. They estimated they spend half an hour to an hour each day syncing their work. Usually, they’re on the lower end of that scale, but in the long run, all the switching takes a lot of time and money. Our friends also experience problems whenever they need to upgrade any of the systems they use.

One other problem with traditional ALM tools that’s worth mentioning is that vendors often add features—for example, adapting a test tool to support issue and defect management. In the issue management system, some features may have been added to support testing. Because neither tool has enough features to support both disciplines, users are confused and don’t know which tool to use. In the end, most purchase both, just to be safe, and end up with the integration issues described earlier.

ALM 2.0

Let’s look at what the emerging tools and practices (including processes and methodologies) in ALM 2.0 try to do for you. ALM is a platform for the coordination and management of development activities, not a collection of life cycle tools with locked-in and limited ALM features. Figure 1-8 and Table 1-1 summarize these efforts.
Figure 1-8. ALM 2.0

Table 1-1. Characteristics of ALM 2.0

Characteristic: Practitioner tools assembled from plug-ins
Benefit: Customers pay only for the features they need. Practitioners find the features they need more quickly.

Characteristic: Common services available across practitioner tools
Benefit: Easier for vendors to deploy enhancements to shared features. Ensures correspondence of activities across practitioner tools.

Characteristic: Repository neutral
Benefit: No need to migrate old assets. Better support for cross-platform development.

Characteristic: Use of open integration standards
Benefit: Easier for customers and partners to build deeper integrations with third-party tools.

Characteristic: Microprocesses and macroprocesses governed by an externalized workflow
Benefit: Processes are “versionable” assets. Processes can share common components.

One of the first things you can see is a focus on plug-ins. This means, from one tool, you can add the features you need to perform the tasks you want, without using several tools! If you’ve used Visual Studio, you’ve seen that it’s straightforward to add new plug-ins to the development environment. Support for the Windows Communication Foundation (WCF) and Windows Presentation Foundation (WPF), for example, was available as plug-ins long before support for them was added as part of Visual Studio 2008.

Having the plug-in option and making it easy for third-party vendors to write plug-ins for the tool greatly eases the integration problems discussed earlier. You can almost compare this to a smorgasbord, where you choose the things you want. So far, this has mostly been adopted by development tool vendors such as IBM and Microsoft, but more plug-ins are coming. IBM has its Rational suite of products, and Microsoft has TFS.
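
The plug-in idea can be pictured as a small registry that the host tool exposes: each extension registers the capability it adds, and practitioners install only the pieces they need. The sketch below is a generic illustration of that pattern, not any vendor’s actual extensibility API.

    class ToolHost:
        """Generic plug-in host (illustrative only; not a real vendor API)."""
        def __init__(self):
            self._plugins = {}

        def register(self, capability, plugin):
            self._plugins[capability] = plugin

        def run(self, capability, *args):
            if capability not in self._plugins:
                raise KeyError(f"No plug-in installed for '{capability}'")
            return self._plugins[capability](*args)

    host = ToolHost()
    host.register("code_review", lambda change: f"Review opened for {change}")
    print(host.run("code_review", "feature/login"))   # install only what you use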

Another thing that eases development efforts is that vendors in ALM 2.0 focus more on identifying features common to multiple tools and integrating them into the ALM platform, including the following:
  • Collaboration

  • Workflow

  • Security

  • Reporting and analysis

Another goal of ALM 2.0 is that the tools should be repository neutral. There shouldn’t be a single repository, but many, so you aren’t required to use the storage solution the vendor proposes. IBM, for example, has declared that its forthcoming ALM solution will integrate with a wide variety of repositories, such as Concurrent Versions System (CVS) and Subversion, just to mention two. This approach removes the obstacle of gathering and synchronizing data, giving you easier access to progress reports and so on. Microsoft uses an extensive set of web services and plug-ins to solve the same issue. It has one storage center (SQL Server); but, by exposing functionality through the use of web services, Microsoft has made it fairly easy to connect to other tools as well.

An open and extensible ALM solution lets companies integrate their own choice of repository into the ALM tool. Both Microsoft and IBM have solutions—data warehouse adapters—that enable existing repositories to be tied into the ALM system. A large organization that has invested in tools and repositories probably doesn’t want to change everything for a new ALM system; hence, it’s essential to have this option. Any way you choose to solve the problem will work, giving you the possibility of having a well-connected and synchronized ALM platform.

Furthermore, ALM 2.0 focuses on being built on an open integration standard. As you know, Microsoft exposes TFS functionality through web services. This isn’t documented publicly and isn’t supported by Microsoft, however, so you need to do some research and go through some trial and error to get it working. This way, you can support new tools as long as they also use an open standard, and third-party vendors have the option of writing cool and productive tools.

Process support built in to the ALM platform is another important feature. By this, we mean having automated support for the ALM process built right into the tools. You can, for instance, have the development process (RUP, Scrum, XP, and so on) automated in the tool, reminding you of each step in the process so you don’t miss creating and maintaining any deliverables or checkpoints.

In the case of TFS, this support includes having the document structure, including templates for all documents, available on the project web site as soon as a new TFS project is created. You can also imagine a tool with built-in capabilities that help you with requirements gathering and specification—for instance, letting you add requirements and specs to the tool and have them transformed into tasks that are assigned to the correct role without you having to do this manually.

An organization isn’t likely to scrap a way of working just because the new ALM tool says it can’t import that specific process. A lot of money has often been invested in developing a process, and an organization won’t want to spend the same amount again learning a new one. With ALM 2.0, it’s possible to store the ALM process in a readable format such as XML.

The benefits include the fact that the process can be easily modified, version controlled, and reported on. The ALM platform can then import the process and execute the application development process descriptions in it. Microsoft, for example, uses XML to store the development process in TFS. The process XML file describes the entire ALM process, and many different process files can coexist. This means you can choose the process template on which you want to base your project when creating a new project.
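
The gain of keeping the process in a readable format is that ordinary tooling can read, diff, and version it. The snippet below parses a simplified, made-up process description; it is not the real TFS/Azure DevOps process-template schema, only an illustration of the principle.

    import xml.etree.ElementTree as ET

    # Simplified, hypothetical process description -- not the actual TFS schema.
    PROCESS_XML = """
    <process name="ScrumLike" version="1.2">
      <workItemType name="Product Backlog Item"/>
      <workItemType name="Bug"/>
      <state workItemType="Bug" name="New"/>
      <state workItemType="Bug" name="Resolved"/>
    </process>
    """

    root = ET.fromstring(PROCESS_XML)
    print(root.get("name"), root.get("version"))
    for wit in root.findall("workItemType"):
        print("work item type:", wit.get("name"))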

It’s also important for an enterprise to have control over its project portfolio to allocate and control resources more effectively. So far, none of the ALM vendors have integrated this support into the ALM platform. There may be good reasons for this, though. For instance, although portfolio management may require data from ALM, the reverse probably isn’t the case. The good thing is that having a standards-based platform makes integration with PPM tools much easier.

ALM 2.0+

So far, not all ALM 2.0 features have been implemented by most of the major ALM tool vendors. There are various reasons for this. One is that it isn’t easy for any company to move to a single integrated suite, no matter how promising the benefits may appear. Making such a switch means changing the way you work in your development processes and maybe even throughout your company. Companies have invested in tools and practices, and spending time and money on a new platform may require considerably more investment.

For Microsoft-focused development organizations, the switch might not be as difficult, however—at least not for the developers. They already use Visual Studio, SharePoint, and many other applications in their daily life, and the switch isn’t that great. But Microsoft isn’t the only platform out there, and competitors like IBM, Serena, and HP still have some work to do to convince the market.

In addition, repository-neutral standards and services haven’t evolved over time. Microsoft, for instance, still relies on SQL Server as a repository and hasn’t built in much support for other databases or services. The same goes for most competition to TFS.

Note

Virtually all vendors use ALM tools to lock in customers to as many of their products as possible—especially expensive, major strategic products like relational database management systems (RDBMSs). After all, these companies live mostly on license sales.

The growth of Agile development and project management in recent years has also changed the way ALM must support development teams and organizations. There has been a clear change from requirements specs to backlog-driven work, and the tools you use need to support this.

It becomes critical for ALM tools to support Agile practices such as build-and-test automation. TDD is being used with increasing frequency, and more and more developers require their tools to support this way of working. If the tools don’t do this, they’re of little use to an Agile organization. Microsoft has taken the Agile way of working to heart in the development of TFS, and this book will show you all you need to know about TFS’s support for Agile practices.

There has also been a move from traditional project management toward an Agile view in which the product owner and Scrum master require support from the tools. Backlog refinement (the art of refining requirements in the Agile world), Agile estimation and planning, and reporting—important to these roles—need to be integrated to the overall ALM solution.

The connection between Operations and Maintenance also becomes more and more important. ALM tools should integrate with the tools used by these parts of the organization.

In the report “The Time Is Right for ALM 2.0+,” Forrester Research presented the ALM 2.0+ concept, illustrated in Figure 1-9.7 This report extended traditional ALM with what Forrester called ALM 2.0+. Traditional ALM covers traceability, reporting, and process automation, as you’ve seen. Forrester envisions the future of ALM also including collaboration and work planning.
Figure 1-9. Future ALM, according to Forrester Research

These concepts are essential and are discussed in detail in this book. A chapter is dedicated to each one except for traceability; traceability and visibility are combined into one chapter because they are closely related. The book’s focus is on ALM 2.0+, but it includes some other older concepts as well. We’ve already looked at the first three cornerstones, but let’s briefly examine the two new ones introduced in ALM 2.0+:
  1. Work planning: For this concept, Forrester includes planning functions, such as defining tasks and allocating them to resources. These planning functions shouldn’t replace the strategic planning functions that enterprise architecture and portfolio management tools provide. Instead, they help you execute and provide feedback on those strategic plans. Integration of planning into ALM 2.0+ helps you follow up on projects so you can obtain estimates and effort statistics, which are essential to all projects.

  2. Collaboration: As mentioned, collaboration is essential. ALM 2.0+ tools must support the distributed development environment that exists in organizations. The tools must help team members work effectively—sharing, collaborating, and interacting as if they were collocated. The tools should also do this without adding complexity to the work environment.

We take a closer look at these topics farther down the road. But before we do that, we examine a new topic on the horizon: DevOps. DevOps is important because it has the potential to solve many ALM problems.

DevOps

In the past couple of years, the concept of DevOps has emerged. In our view, DevOps is close to, or even the same as, the unified view of ALM presented earlier in the chapter. One big difference compared to a more traditional approach is that DevOps brings development and operations staff closer—not just in thought, but also physically. Because they’re all part of the DevOps team, there is no handoff from one part to the other. Team members work together to deliver business value through continuous development and operations. Figure 1-10 shows how Microsoft looks at DevOps ( https://azure.microsoft.com/en-us/overview/devops/ ).
Figure 1-10. DevOps according to Microsoft

DevOps isn’t a method on its own; instead, it uses known Agile methods and processes like Kanban and Scrum, which are popular in many IT organizations. Basically, these are project management methods based on Agile concepts and are used for development (mostly Scrum) and operations (mostly Kanban). The key concepts are continuous development, continuous integration, and continuous deployment. What is important is working with small changes instead of large releases (which minimizes risk), getting rid of manual steps by automating processes, and having development and test environments that are as close as possible to the production environment.

The purpose of DevOps is to optimize the time from the development of an application until it’s running stably in the production environment. The quicker you can get from idea to production, the quicker you can respond to changes in, and influences from, the market, which is crucial for a successful business.
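
The “get rid of manual steps” idea boils down to gates that are checked by a machine instead of a person. The sketch below is a toy continuous-deployment gate, assuming a pytest test suite and a scripted deployment step (the deploy function is a placeholder): run the tests, and only if they are green hand the build over to deployment.

    import subprocess

    def tests_pass() -> bool:
        """Run the automated test suite; any non-zero exit code means failure."""
        result = subprocess.run(["python", "-m", "pytest", "-q"])
        return result.returncode == 0

    def deploy(environment: str):
        print(f"Deploying build to {environment} ...")   # placeholder, not a real deployment

    if tests_pass():
        deploy("staging")   # small change, automatically verified, then shipped
    else:
        print("Tests failed -- deployment stopped, nothing reaches production.")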

Azure DevOps Introduction

In 2018, Microsoft introduced the concept of Azure DevOps, which includes a new suite of tools or services. This new concept was previously known as Visual Studio Team Services, or VSTS. Azure DevOps Services is a cloud service for collaborating on application development ( https://docs.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops-services?view=vsts ). It provides an integrated set of features that you access through your web browser or integrated development environment (IDE) client, including the following:
  • Git repositories for source control of code

  • Build-and-release services to support continuous integration and delivery of apps

  • Agile tools to support planning and tracking work, code defects, and issues using Kanban and Scrum processes

  • A variety of tools to test your apps, including manual/exploratory testing, load testing, and continuous testing

  • Highly customizable dashboards for sharing progress and trends

  • Built-in wiki for sharing information with your team

The Azure DevOps ecosystem also provides support for adding extensions, integrating with other popular services (such as Campfire, Slack, Trello, UserVoice, and more), and developing your own custom extensions.
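
Because the services are also exposed over a REST API, scripts and third-party tools can integrate with them. As a hedged illustration, the call below lists the team projects in an organization using a personal access token; the organization name and token are placeholders, and you should check Microsoft’s documentation for the current API version.

    import base64
    import urllib.request

    ORG = "your-organization"           # placeholder
    PAT = "your-personal-access-token"  # placeholder; never hard-code a real token

    url = f"https://dev.azure.com/{ORG}/_apis/projects?api-version=6.0"
    token = base64.b64encode(f":{PAT}".encode()).decode()   # PAT sent as basic auth

    request = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(request) as response:
        print(response.read().decode())   # JSON list of team projects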

Azure DevOps is available as a set of services, and we can choose to use some or all of them. This means we can tailor it to our specific needs. Each Azure DevOps service is open and extensible, and they work great for any type of application regardless of the framework, platform, or cloud. You can use them together for a full DevOps solution or with other services. Here is an overview of the services available:
  • Azure Boards: Powerful work tracking with Kanban boards, backlogs, team dashboards, and custom reporting. This is where this book has its biggest focus.

  • Azure Pipelines: Continuous integration/continuous delivery (CI/CD) that works with any language, platform, and cloud. Connect to GitHub or any Git repository and deploy continuously.

  • Azure Artifacts: Maven, npm, and NuGet package feeds from public and private sources

  • Azure Repos: Unlimited cloud-hosted private Git repos for your project; collaborative pull requests, advanced file management, and more

  • Azure Test Plans: All-in-one planned and exploratory testing solution

With Azure DevOps comes a brand-new graphical user interface (GUI), which has been in preview since late 2018 (Figure 1-11). All settings and navigation now take place in the navigation bar on the left. From there, you can access every aspect of your Azure DevOps project.
Figure 1-11. The new GUI of Azure DevOps

Clicking Boards, for instance, expands the menu on the left and shows more options for viewing boards and backlogs (Figure 1-12).
Figure 1-12. The new GUI of Azure DevOps can be expanded to show more options for the Azure DevOps services—in this case, for Azure Boards

The project settings are now available in the lower left corner, as shown in Figure 1-13. This is the place for configuring teams, security, notifications, iterations, areas, and so on. We will see these features throughout this book.
Figure 1-13. Project settings are accessed at the bottom of the navigation bar

If you will not be using all Azure DevOps services, you can turn them off (or on) from the overview panel of project settings (Figure 1-14).
Figure 1-14. Turning on Azure DevOps services from project settings

As stated earlier in this chapter, TFS will be renamed Azure DevOps Server. It will still be an on-premises installation, and it will be exciting to see the direction it takes.

Summary

This chapter presented an overview of what ALM aims for and what it takes for the ALM platform to support a healthy ALM process. You’ve seen that ALM is the coordination and synchronization of all development life cycle activities. There are four ways of looking at it:
  1. SDLC view

  2. Service management or operations view

  3. APM view

  4. Unified view

Traceability, automation of high-level processes, and visibility into development processes are three pillars of ALM. Other key components are collaboration, work planning, workflow, security, reporting, analytics, being open-standards based, being plug-in friendly, and much more. A good ALM tool should help you implement and automate these pillars and components to deliver better business value to your company or organization.

We also examined the concept of Azure DevOps services and Azure DevOps Server, Microsoft’s newest set of tools for developers and others involved in the development of applications and systems. Throughout this book we will mostly use the features of Azure Boards.
