©  Ambily K K 2020
A. K K, Azure DevOps for Web Developers, https://doi.org/10.1007/978-1-4842-6412-6_5

5. Test Management Using Azure DevOps

Ambily K K1  
(1)
Hyderabad, India
 
The Azure DevOps service has all the test-related features needed to support end-to-end test management. By default, the test management features are not available, so you must enable them. You can enable the test plan in the Billing section on the Organization Settings screen of Azure DevOps to start exploring features such as test planning, tracking, execution, browser-based test annotations, centralized reporting, and so on (Figure 5-1). This chapter will explain the various test management features available in the Azure DevOps service.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig1_HTML.jpg
Figure 5-1

Billing page

Azure DevOps provides three different test management features: test plans, test suites, and test cases.
  • Test plan: This is a group of test suites and test cases to be executed in an iteration or in an area.

  • Test suite: This is a group of test cases to validate a scenario or use case. It can be a static test suite, a requirement-based test suite, or a query-based test suite; we will discuss more about test suites in the “Test Suites” section of this chapter.

  • Test cases: These validate a single functionality of the application. Test cases can be added to test suites or test plans and can be mapped to multiple suites or plans at a time.

Test Cases

Test cases are the basic test elements to validate the functionality of an application or the nonfunctional requirements associated with an application. In test management, test cases are categorized based on their usage. The following are a few of them:
  • Unit test case: This is used by developers to verify the completeness of their implementation. Many unit test frameworks and libraries are available for each implementation technology to automate unit test cases, such as JUnit for Java-based applications, NUnit for .NET, Jasmine and Karma for Angular, and Mocha and Chai for JavaScript-based applications.

  • Functional test case: This is a test case used to validate the functional implementation of the system.

  • Security test case: This verifies the security constraints of the system. For example, security test cases may verify any cross-site scripting issues with a web application.

  • Performance test case: This verifies the performance of the system with different parameters such as data volume, user load, stress testing, and so on.

  • User acceptance test: This is acceptance testing by the business stakeholders.

There are many other test case types, such as regression, integration, smoke, sanity, system, and build verification (BVT) test cases. As a good practice, test cases are reviewed by test leads or a functional SME to align them with the expected functionality. Most of the nonfunctional requirement (NFR)-related test cases are reviewed by the architect.
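To make the unit test category concrete, here is a minimal sketch in Python's pytest style; the function under test and its expected behavior are illustrative assumptions, not taken from the chapter:

```python
# A minimal unit test sketch. The function under test and its
# behavior are illustrative, not from the chapter.

def calculate_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_applied():
    # Positive case: a valid discount reduces the price.
    assert calculate_discount(200.0, 10) == 180.0

def test_discount_boundary():
    # Boundary conditions: 0% and 100% discounts.
    assert calculate_discount(50.0, 0) == 50.0
    assert calculate_discount(50.0, 100) == 0.0
```

A test runner such as pytest would discover and execute the `test_` functions automatically; the same pattern applies in NUnit or JUnit with attributes/annotations instead of naming conventions.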

In most cases, test cases have a few predefined fields, such as the steps for verifying the functionality, the acceptance criteria for each step, and so on. Azure DevOps provides a test case template, which can be customized to fit individual project requirements. Figure 5-2 shows the default template.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig2_HTML.jpg
Figure 5-2

Test case template

The main fields are the steps, which define the actions and expected results, and the parameters. A step defines an action by the user to validate a user interaction with the system. For example, a user can provide credentials in a login form and click the Login button to log in to the system. When the user performs an action such as the login, user input is required. When a tester executes such steps, they should pass various values to identify whether the system is working properly and responding to different user input values. These values define the positive cases, negative cases, and boundary conditions. If a test step requires a user value that can vary and return different results, then such fields are marked as parameters.

Shared steps are another important concept; these are a set of steps shared across multiple test cases. For example, the login test steps repeat across most of the test cases that require user authentication. The user can define such steps, along with their actions, results, and parameters, as a shared step and reuse it in other test cases, following the Don't Repeat Yourself (DRY) principle to avoid duplicating the same content.
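In code terms, a shared step plays the same role as a reusable helper function: the login sequence is defined once and invoked from every test that needs it. A sketch of the idea (the step texts and names are illustrative):

```python
# Shared steps modeled as a reusable helper; the step texts
# are illustrative, not taken from Azure DevOps.

def login_steps(username: str, password: str) -> list:
    """Return the shared login steps, bound to parameter values."""
    return [
        "Open the login page",
        f"Enter username '{username}' and password '{password}'",
        "Click the Login button",
    ]

def checkout_test_steps() -> list:
    # Reuse the shared login steps instead of repeating them (DRY).
    return login_steps("tester1", "P@ssw0rd") + [
        "Add an item to the cart",
        "Click Checkout and verify the order summary",
    ]
```

Changing the shared helper updates every test that uses it, which is exactly the maintenance benefit shared steps provide in the test plan.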

Shared Steps and Parameters

As mentioned earlier, a shared step is a set of test case steps that can be reused in multiple test cases. For example, the steps for logging in to the system can be defined as a shared step and included in many other test cases.

Define the steps, select them to group them as shared steps, and click the “Create shared steps” icon on the top of the steps (third icon), as shown in Figure 5-3.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig3_HTML.jpg
Figure 5-3

Shared steps

Provide a name for the shared steps to continue. This replaces the selected steps with a single step referenced by the name of the shared steps, as shown in Figure 5-4.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig4_HTML.jpg
Figure 5-4

Shared steps inserted

Once you double-click the step name, it opens the newly added “Shared steps” work item, as shown in Figure 5-5.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig5_HTML.jpg
Figure 5-5

“Shared step” work item

Users can further customize the steps by adding parameters, which expect data from users at the time of execution. For the previous scenario, add the parameters username and password and bind them to the steps.

Parameters can be added by selecting the parameter option (../images/501036_1_En_5_Chapter/501036_1_En_5_Figa_HTML.jpg) or by adding variables with @, as in @username. Once they’re added, navigate back to the test case where the parameters will be listed in the “Parameter values” section. Provide the set of values expected for the parameter, as shown in Figure 5-6.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig6_HTML.jpg
Figure 5-6

Parameter values

If the parameters are used across multiple test cases, they can be added as a shared parameter set. Just like shared steps, a shared parameter set will be available for all test cases to reuse the same set of data across multiple test scenarios.

In Figure 5-6, select the parameters and click the “Convert to shared parameters” link to convert them to a shared parameter set. The “Parameters” section will cover more about parameters.
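Conceptually, a parameter set is a table of values that the runner iterates over, executing the same steps once per row. The sketch below models that idea in plain Python; the column names follow the username/password example above, and the validation logic is a stand-in for the system under test:

```python
# A parameter set modeled as rows of values; each row drives one
# execution of the same test steps. Values are illustrative.

parameter_set = [
    {"username": "validuser", "password": "Correct#1", "expect_login": True},
    {"username": "validuser", "password": "wrongpass", "expect_login": False},
    {"username": "",          "password": "Correct#1", "expect_login": False},
]

def attempt_login(username: str, password: str) -> bool:
    """Stand-in for the system under test."""
    return username == "validuser" and password == "Correct#1"

def run_parameterized_test() -> list:
    results = []
    for row in parameter_set:
        actual = attempt_login(row["username"], row["password"])
        results.append("Passed" if actual == row["expect_login"] else "Failed")
    return results
```

The three rows cover a positive case, a negative case, and a boundary case (empty username), mirroring the value categories discussed earlier.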

Test Suites

Test suites are a set of test cases executed to validate the implementation of a feature or component. We can define a test suite as a static suite, a requirement-based suite, or a query-based suite, as shown in Figure 5-7.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig7_HTML.jpg
Figure 5-7

Test suites

Here are the differences between the three options:
  • Static suite: Add the test cases manually to the suite.

  • Requirement-based suite: Select the test cases based on the requirements.

  • Query-based suite: Select the test cases using a query.
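Behind a query-based suite is a work item query. As a rough sketch, a WIQL (Work Item Query Language) query that selects test cases by tag might be built like the following; the tag value is a placeholder, and this only constructs the JSON payload that the Azure DevOps WIQL REST endpoint expects, without sending it:

```python
import json

# Sketch of the WIQL payload a query-based suite might use.
# The tag value is a placeholder example.

def build_test_case_query(tag: str) -> str:
    wiql = (
        "SELECT [System.Id], [System.Title] "
        "FROM WorkItems "
        "WHERE [System.WorkItemType] = 'Test Case' "
        f"AND [System.Tags] CONTAINS '{tag}'"
    )
    # The WIQL REST endpoint accepts the query as a JSON body.
    return json.dumps({"query": wiql})
```

Such a payload could be POSTed to the organization's `_apis/wit/wiql` endpoint (with an appropriate `api-version` and authentication), which is essentially what the query-based suite does for you behind the UI.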

Test Plans

A test plan defines the test suites and test cases corresponding to an area or iteration. It is used for grouping the test cases based on the features prioritized in an iteration or related to a specific area.

This section displays the existing plans and allows us to create new test plans, as shown in Figure 5-8.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig8_HTML.jpg
Figure 5-8

Test Plans section

Select the New Test Plan option to create a new test plan, as shown in Figure 5-9.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig9_HTML.jpg
Figure 5-9

Creating a new test plan

Once the test plan is created, the user will be redirected to the details page to add new test cases. Test cases can be added to the test plan using the following two options:
  • Add existing test cases: This option opens a query window to select existing test cases based on a query, as shown in Figure 5-10. The user can edit the query to filter the test cases and select specific test cases from the result set.

../images/501036_1_En_5_Chapter/501036_1_En_5_Fig10_HTML.jpg
Figure 5-10

Adding the test cases to suite

  • Add test cases using grid: This option supports the entry of multiple test cases using an Excel or table format. The user can create test cases in Excel or CSV format and then copy them directly into the grid view to insert the test cases. This feature helps testers work offline.

    Enter the test case title in the first column and define the steps associated with the test case in the second column. Each step should be specified on a separate row along with its expected result, as shown in Figure 5-11.

../images/501036_1_En_5_Chapter/501036_1_En_5_Fig11_HTML.jpg
Figure 5-11

Adding test cases using a grid

The Assigned To and State columns can be filled in based on the requirements or left empty. By default, the system assigns the Design state to every new test case. Once the test cases are added, click the Save icon (../images/501036_1_En_5_Chapter/501036_1_En_5_Figb_HTML.jpg) on top of the grid to save them.
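The offline grid described above (title in the first column, one step with its expected result per row) can be produced with a short script. The column layout below is an illustration of the idea, so verify it against the actual grid view before pasting:

```python
import csv
import io

# Build an offline test case grid: the title appears on the first
# step's row only; each subsequent step gets its own row.
# The column layout is illustrative; check it against the grid view.

def build_grid(title: str, steps: list) -> str:
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["Title", "Step Action", "Step Expected"])
    for index, (action, expected) in enumerate(steps):
        writer.writerow([title if index == 0 else "", action, expected])
    return buffer.getvalue()

grid = build_grid(
    "Verify user login",
    [
        ("Open the login page", "Login form is displayed"),
        ("Enter valid credentials and click Login", "Home page is displayed"),
    ],
)
```

Opening the resulting CSV in Excel and copying the rows into the grid view reproduces the workflow the chapter describes.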

Once they’re saved, the system will create new test cases and display the ID for each test case, as shown in Figure 5-12.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig12_HTML.jpg
Figure 5-12

Test cases, grid view

Define Tab

The Define tab shows the existing test cases corresponding to the selected test plan, as shown in Figure 5-13. The user can change the list view into a grid view, which allows for multiple test case editing. Also, the system provides the options to reorder the test cases using the move up and move down icons, to select columns, and to filter the test cases using various filter options.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig13_HTML.jpg
Figure 5-13

Define tab

For each of the test cases, there are a set of options available.
  • View Linked Items: This option shows the linked test suites, requirements, and bugs. If the system establishes proper traceability, the user can identify the requirements and bugs associated with the current test case, as shown in Figure 5-14.

../images/501036_1_En_5_Chapter/501036_1_En_5_Fig14_HTML.jpg
Figure 5-14

Linked items

  • Open test case: This option opens the test case.

  • Assign configuration: This specifies the system configurations required to execute the test cases, as shown in Figure 5-15; refer to the “Configurations” section for more details.

../images/501036_1_En_5_Chapter/501036_1_En_5_Fig15_HTML.jpg
Figure 5-15

Assigning configurations

  • Remove: This option removes the selected test case.

  • Edit test case(s) in grid: This allows you to edit multiple test cases in a grid.

  • Edit test case(s): This allows you to edit the test case or test cases by selecting specific fields, as shown in Figure 5-16.

../images/501036_1_En_5_Chapter/501036_1_En_5_Fig16_HTML.jpg
Figure 5-16

“Edit work items” screen

  • Copy test case(s): This copies the test case.

  • Export test case(s) to csv: This allows you to export the selected test cases to CSV.

Execute Tab

Once the test plan with a few test cases is ready, navigate to the Execute tab to execute the test cases. The user can select one or more test cases and mark them as Pass Test, Fail Test, Block Test, or Not Applicable, as shown in Figure 5-17.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig17_HTML.jpg
Figure 5-17

Executing a test plan

You’ll see a list of run options available to execute the selected test cases: Run for web application, Run for desktop application, and Run with options.

Also, the user can select other columns using the columns option (../images/501036_1_En_5_Chapter/501036_1_En_5_Figc_HTML.jpg) available on top of the tests. There is another option to filter the list of test cases using various parameters, as in Figure 5-18.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig18_HTML.jpg
Figure 5-18

Filtering option

Run for Web Application

Select one or more test cases and click the “Run for web application” option. This will open the Runner window with many options and a list of steps associated with the test cases, as shown in Figure 5-19.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig19_HTML.jpg
Figure 5-19

Runner window, test plans

The test steps provide three options: edit (../images/501036_1_En_5_Chapter/501036_1_En_5_Figd_HTML.jpg), Pass test step (../images/501036_1_En_5_Chapter/501036_1_En_5_Fige_HTML.jpg), and Fail test step (../images/501036_1_En_5_Chapter/501036_1_En_5_Figf_HTML.jpg). Click the edit icon to edit the step and the expected result associated with the test step, as shown in Figure 5-20.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig20_HTML.jpg
Figure 5-20

Runner window, edit step

The user will get further options to add new test steps, delete an existing step, and move the selected step up or down in the test case. Pass and Fail mark the test step status as part of this execution.

The options available near the test case title enable the user to mark the entire test case as pass, fail, pause, block, or not applicable, as shown in Figure 5-21.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig21_HTML.jpg
Figure 5-21

Runner window, mark status

The other options available on the top of the Runner window are as follows:
  • Save: Save the changes.

  • Save and Close: Save and close the Runner window.

  • Create new Bug/Add to existing bug: From the Runner window, the user can create a bug related to the current test case execution.

  • Capture screenshot: Capture the current screen to provide more details for bug fixing or feedback.

  • Capture user actions as image action log: Capture all the user actions as image logs, which will help the team to reproduce the execution or defects.

  • Record screen: Add a video recording of the execution of the test case, which will help provide further details about the user actions.

  • Add Comment: Add comments as part of the test case execution.

  • Add attachment: Add additional information as an attachment such as the log files.

  • Record screen with audio: Add a screen recording with audio, where the user can explain the observations clearly.

These options help QA engineers or testers provide enough details for the development team to reproduce the defects. Before proceeding with screen recording or image capture, the user should install the Test & Feedback extension, for example, the Google Chrome extension if Chrome is used for browsing the web application, as shown in Figure 5-22.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig22_HTML.jpg
Figure 5-22

Installing the Test & Feedback extension

Once the installation completes, configure the Test & Feedback extension to connect with the Azure DevOps organization, as shown in Figure 5-23.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig23_HTML.jpg
Figure 5-23

Test & Feedback extension

Once integrated with Azure DevOps, relaunch the Runner window to execute the test case with an image or screen recording.

The user can record separate operations against each step of the test cases. In Figure 5-24, the user has passed the first step, failed the second step, and is creating a new bug against the failed step.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig24_HTML.jpg
Figure 5-24

Runner window, new bug

The system automatically captures the steps, status of each test step, and video or image logs. Moreover, the system captures the current system info, which helps the developers to reproduce the issue in a similar setup. Figure 5-25 shows the sample system info captured.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig25_HTML.jpg
Figure 5-25

System info captured

All details related to the bug will be attached to the attachment tab of the bug, as shown in Figure 5-26.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig26_HTML.jpg
Figure 5-26

Attachments

Creating a bug based on test case execution provides enough information such as the steps followed, screen recording, image captured, comments, etc., to reproduce the defects in a similar setup.

The Test & Feedback extension can also be used for exploratory testing, where the test cases are not defined in the system. Exploratory testing is unscripted testing of the system to identify defects. This is explained in more detail in my article published in Simple Talk (https://www.red-gate.com/simple-talk/dotnet/net-development/exploratory-testing-chrome-plugin/).

The “Test Cases” section discussed test cases, shared steps, and parameterized test cases and showed an example test case to verify the user login. That test case has parameters associated with it. When such a test case is executed, the runner repeats the test case steps for each set of parameter values. The execution for each parameter set is added as an iteration in the test runner. In Figure 5-27, the parameter set contains three sets of values, resulting in three iterations.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig27_HTML.jpg
Figure 5-27

Runner window, multiple iterations
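The iteration behavior can be pictured as a loop over the parameter rows, with one runner iteration per row. The aggregation rule sketched here, where the test case passes only if every iteration passes, is a common convention rather than a statement of exactly how Azure DevOps reports outcomes:

```python
# One runner iteration per parameter row; the overall outcome here
# treats the test case as failed if any iteration fails (a common
# convention, assumed rather than documented by the chapter).

def run_iterations(rows, step_fn):
    outcomes = [step_fn(row) for row in rows]  # True = iteration passed
    overall = "Passed" if all(outcomes) else "Failed"
    return outcomes, overall

rows = [
    {"username": "user1", "password": "ok"},
    {"username": "user2", "password": "ok"},
    {"username": "user3", "password": "bad"},
]

outcomes, overall = run_iterations(rows, lambda r: r["password"] == "ok")
```

With three parameter rows, the runner shows three iterations, matching the behavior in Figure 5-27.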

Run for Desktop Application

This option supports running the test cases against a desktop application. Select the “Run for desktop application” option to run one or more test cases associated with a desktop application. If this is the first time you are executing test cases with the “Run for desktop application” option, the system will prompt you to download and install Test Runner before executing the test cases, as shown in Figure 5-28.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig28_HTML.jpg
Figure 5-28

Desktop runner

Download and install Test Runner. Any new test case run against the desktop application will launch the desktop-based Test Runner and capture the desktop environment details while running an application installed in the system.

Run with Options

The “Run with options” dialog lists the options for executing the test cases based on various existing tools. The actions will change based on the selected test type and runner. For example, if you are planning to execute an automated test case using a release stage, the required input will look like Figure 5-29.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig29_HTML.jpg
Figure 5-29

“Run with options” window

The different options available for the test type and runner are as follows:
  • Manual tests using browser-based runner

  • Manual tests using test runner client

  • Automated tests using release stage

  • Manual tests using Microsoft Test Manager 2017 client

  • Manual tests using Microsoft Test Manager 2015 or earlier client

There are other options available against each of the test cases, as shown in Figure 5-30.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig30_HTML.jpg
Figure 5-30

Test case, options

The options are as follows:
  • View execution history: This shows the execution history of the test case.

  • Mark Outcome: This marks the outcome as pass, fail, block, etc.

  • Run: This runs the test cases using any of the previously mentioned three options.

  • Reset test to active: This resets the test to active mode.

  • Edit test case: This edits the test case.

  • Assign tester: This assigns a tester.

  • View test result: This shows the test results.

Chart

The Chart tab provides two different options to create charts based on test cases and results.
  • New test case chart: Select this option to create a new chart based on the test cases grouped under this test plan, as shown in Figure 5-31. The user can select a specific chart type and associated parameters to define a chart that brings some insight.

../images/501036_1_En_5_Chapter/501036_1_En_5_Fig31_HTML.jpg
Figure 5-31

New test case chart

  • New test result chart: This option supports the creation of a new chart based on the test case results, as shown in Figure 5-32.

../images/501036_1_En_5_Chapter/501036_1_En_5_Fig32_HTML.jpg
Figure 5-32

New test result chart

In this case, there are fewer chart types available compared to the test case charts. Moreover, the “Group by” parameter will be based on the test run and result only.

Once a chart is created, it can be added to a dashboard. The “Add to dashboard” option helps the user add the specified chart to the team dashboard to surface its insights there, as shown in Figure 5-33.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig33_HTML.jpg
Figure 5-33

“Add to dashboard” option

The “Test plan settings” window, shown in Figure 5-34, is where you can set the test run settings and test outcome settings.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig34_HTML.jpg
Figure 5-34

Test plan settings

Progress Report

This section shows the test plan, suite, and test case execution and results in a dashboard. The user can filter a report based on various criteria such as the test suite, date range, tester, etc. Figure 5-35 shows a sample progress report.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig35_HTML.jpg
Figure 5-35

Progress report

Parameters

The Parameters section shows the shared parameters defined across the project along with their values and related test cases. Figure 5-36 shows the parameters defined in earlier sections.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig36_HTML.jpg
Figure 5-36

Parameters

The user can add new shared parameters along with their values and can turn the related test case view on or off.

Configurations

The Configurations section deals with the different test environment configurations and associated configuration variables.
  • Test configuration: This defines a set of configurations that can be used for executing a particular test case. For example, when the user tests the platform, the configuration will contain the supported OS such as Windows, Mac, or Linux along with other configuration variables.

  • Configuration variables: This is used for defining the variables, which can be configured for test configurations, as shown in Figure 5-37.

../images/501036_1_En_5_Chapter/501036_1_En_5_Fig37_HTML.jpg
Figure 5-37

Test configurations
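A test configuration is essentially a named combination of configuration variable values. Enumerating the candidate combinations, for example for the platform-testing scenario above, amounts to a cross product of the variables; the variable names and values below are illustrative:

```python
from itertools import product

# Configuration variables and their values (illustrative examples).
variables = {
    "Operating System": ["Windows", "macOS", "Linux"],
    "Browser": ["Edge", "Chrome"],
}

# Each combination of values is one candidate test configuration.
def enumerate_configurations(variables: dict) -> list:
    names = list(variables)
    return [dict(zip(names, combo)) for combo in product(*variables.values())]

configurations = enumerate_configurations(variables)  # 3 x 2 = 6 combinations
```

In practice, a team would define only the combinations it actually supports as named test configurations rather than the full cross product.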

Runs

The Runs section shows the list of recent runs along with the run results, as shown in Figure 5-38. The user can select a specific run using the run ID or using the Filter tab.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig38_HTML.jpg
Figure 5-38

Test runs

Also, the user can open a run by double-clicking its row to view the complete details associated with it, as shown in Figure 5-39.
../images/501036_1_En_5_Chapter/501036_1_En_5_Fig39_HTML.jpg
Figure 5-39

Run details

Load Test

The load testing feature allows you to conduct performance testing on an application. Understanding end-to-end load testing requires a good understanding of performance testing. The load testing feature in Visual Studio and in Azure DevOps is being deprecated; please refer to the Microsoft blog at https://devblogs.microsoft.com/devops/cloud-based-load-testing-service-eol/ for more details.

Summary

Azure DevOps provides a set of test features for end-to-end test management. Test plans, test suites, and test cases are the fundamental elements of test management, along with other supporting features. Moreover, the integrated platform provides end-to-end integration between test cases and requirements, implementations, defects, and other artifacts, which enables full traceability.
