CHAPTER 21

image

Manual Testing

Great quality does not happen by itself. We need to build quality into the process to get great results. The pillars of ALM (traceability, visibility, and automation of processes) should be part of the testing effort to give us predictable quality in our projects.

In this chapter we look at how Microsoft Test Manager (MTM) can be used to manage the testing process. But before we dive into the details of working with MTM, let’s take a look at what we mean by a testing process. In Figure 21-1 we have an example of a common model for testing.

9781430243441_Fig21-01.jpg

Figure 21-1.  A manual testing process

First we start with planning. This is where we look at the requirements planned for implementation and decide how much testing is needed to validate that they are implemented correctly. When we have a plan we can design our testing effort; this of course includes writing test cases, but also setting up test environments and test configurations. With the test assets in place, we can run tests of different kinds: scripted tests and exploratory tests, as well as automated tests. If things go wrong, we file bugs and track how the bug fixes are coming along. When a bug fix is ready to be verified, we want it to be simple to get back to the failing test case and re-run it to verify the fix. Finally, after going through this cycle a few times, we want to find our candidates for regression testing. We can then choose to automate those tests so that we get quicker test cycles in coming sprints.

This may seem like a waterfall model, and yes, the process is sequential, but we can still work iteratively and incrementally. In an agile project we plan, design, test, and so on in every iteration. Activities such as integration testing and test automation often need more of the release to be completed before they can be performed. We then adapt and plan those activities for a sprint when the prerequisites are in place.

Now that we have an idea of a testing process let’s look at how MTM gives us tooling to work according to the process in an effective way.

About Microsoft Test Manager

Microsoft Test Manager is the Visual Studio for testers, the one-stop shop for the entire test process. A tester can do all the testing activities within a single application (not entirely true, but pretty close actually).

At a high level Microsoft Test Manager handles

  • Test planning
  • Test design
  • Test execution and test run analysis
  • Rich bug reporting with data collection from the machines under test
  • Work Item tracking (including bug tracking, of course)
  • Test environment management

Sounds like a good match to our proposed test process, doesn’t it?

Figure 21-2 shows how artifacts in TFS and MTM are related in the context of a test plan. We will look at the details of each in the coming sections in this chapter.

9781430243441_Fig21-02.jpg

Figure 21-2.  TFS and MTM artifacts related to a test plan

image Note  Is there a Web UI for the tester? No, the Microsoft Test Manager client is a rich desktop application that needs to be installed where you plan to work with the test assets. Some components such as test cases and test results can be accessed using other clients but you will only get partial functionality outside MTM. There is a third-party solution available as a plug-in to the TFS Web Access called the Web Test Manager, which may be an option: http://www.selagroup.com/alm/products_WTM.html.

Connecting Microsoft Test Manager to TFS

MTM is always connected to a TFS server, so the first thing to do when starting MTM is to connect to the TFS project we are working on (see Figure 21-3). The connection to TFS is required not only at design time but also when running tests. This is something to be aware of if we want to perform acceptance testing in a customer environment, where typically there is no access to the TFS server.

9781430243441_Fig21-03.jpg

Figure 21-3.  Connecting Microsoft Test Manager to your TFS project

image Note  If your TFS is published over HTTPS or on a different port, just fill in the entire URL. For example https://alm.tfspreview.com connects MTM to a TFS collection on the Team Foundation Service cloud service.
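
If you build supporting tools outside MTM, the same connection can be made through the TFS client object model. Below is a minimal sketch using the test management API; the collection URL and the ExpenseReporting project name are placeholders from our example scenario, not required values. The later sketches in this chapter reuse the collection and project variables from this example.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class ConnectToTestProject
{
    static void Main()
    {
        // Connect to the team project collection; use the full URL, including HTTPS or a custom port if needed.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("https://alm.tfspreview.com/DefaultCollection"));
        collection.EnsureAuthenticated();

        // Open the team project through the test management service.
        var testService = collection.GetService<ITestManagementService>();
        ITestManagementTeamProject project = testService.GetTeamProject("ExpenseReporting");

        Console.WriteLine("Connected to {0}", project.TeamProjectName);
    }
}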

Planning the Tests

In the previous chapter we looked at test planning and what a test plan typically contains. The test specification part of the test plan is something we can directly map to artifacts in TFS and MTM.

What Is a Test Plan?

In MTM a test plan is essentially two things—details about test effort and the set of tests to run as part of the plan (called suites in MTM).

We recommend keeping test plans small; we prefer one test plan per sprint over one for the entire release. Small plans are more to the point and map well onto the test process. If we look at the status of a small test plan, we can immediately grasp what it means; if the plan covers the entire project, it is much harder to tell whether we are progressing as planned. The data in small plans can still be aggregated into reports over the entire project, we just have to use TFS reporting to do so (see Chapter 32 for more information on how to do that).

Creating the Test Plan

First, we need to create a test plan.

For our scenario we will start with nothing and create a test plan for the first sprint as shown in Figure 21-4.

9781430243441_Fig21-04a.jpg

9781430243441_Fig21-04b.jpg

Figure 21-4.  Adding a new test plan
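
Test plans can also be created outside MTM, for example from a provisioning script, using the same test management API. A minimal sketch, reusing the project variable from the earlier connection example; the plan name and dates are only illustrative.

// Create a small, sprint-sized test plan and save it to TFS.
ITestPlan plan = project.TestPlans.Create();
plan.Name = "Sprint 1";
plan.StartDate = DateTime.Today;
plan.EndDate = DateTime.Today.AddDays(21);
plan.Save();

Console.WriteLine("Created test plan {0} (id {1})", plan.Name, plan.Id);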

Test Plan Properties

The properties of the test plan contain general information about the test plan, details on how test runs are set up, as well as links to documents and other resources. The links are useful when we want to add context to the plan that does not have a place in MTM; for instance, we can reference a test plan document containing all the details about the testing for the release we are working on. Figure 21-5 shows the test plan properties.

9781430243441_Fig21-05.jpg

Figure 21-5.  Test plan properties

Suites

Suites group together the tests we want to run and track in this plan. We can choose from three types of suites:

  • Static suite: The content of this suite is manually added test cases.
  • Query-based suite: A query-based suite lists all test cases matching a given work item filter.
  • Requirements-based suite: This suite shows the test cases associated with a selected TFS requirement.

The query-based suite (see Figure 21-6) is great for any situation where you want an up-to-date list of tests based on some criteria. Typical usages are suites of tests for a specific application area or of all automated tests.

9781430243441_Fig21-06.jpg

Figure 21-6.  Query-based suite for all automated test cases
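
The filter behind a query-based suite is an ordinary work item query. As an illustration, a suite like the one in Figure 21-6 could be created through the API roughly as follows; the suite title and the WIQL filter are just examples, and the project and plan variables come from the earlier sketches.

// Create a query-based (dynamic) suite that picks up all automated test cases.
IDynamicTestSuite autoSuite = project.TestSuites.CreateDynamic();
autoSuite.Title = "Automated Tests";
autoSuite.Query = project.CreateTestQuery(
    "SELECT * FROM WorkItems " +
    "WHERE [System.WorkItemType] = 'Test Case' " +
    "AND [Microsoft.VSTS.TCM.AutomationStatus] = 'Automated'");

// Add the suite to the plan and persist the change.
plan.RootSuite.Entries.Add(autoSuite);
plan.Save();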

The requirements-based suite is a little different. Here we use a work item category, the Requirement Category, that maps to the configured work item type(s) representing a requirement. In our scenario using Scrum this would map to Product Backlog Item and Bug.

image Note  If you want to see which work item types are mapped to the Requirement Category in your project you can use the witadmin.exe tool and run the following command:

witadmin exportcategories /p:your_tfs_project /collection:your_tfs_collection_url

We would typically add all requirements in the sprint to the test plan to associate the acceptance tests with the corresponding requirement. Figure 21-7 shows how we use a work item query matching the Requirement Category and Sprint 1 to find the requirements that we can now add to our plan.

9781430243441_Fig21-07.jpg

Figure 21-7.  Adding requirements for Sprint 1 to our test plan
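
The query in Figure 21-7 is a normal work item query, and using the work item category keeps it independent of the process template’s actual type names. The same filter can be expressed through the work item tracking API; in this sketch the project name and iteration path are from our example scenario.

using System;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

// Reuses the authenticated 'collection' from the earlier connection example.
var store = collection.GetService<WorkItemStore>();

string wiql =
    "SELECT [System.Id], [System.Title] FROM WorkItems " +
    "WHERE [System.TeamProject] = 'ExpenseReporting' " +
    "AND [System.WorkItemType] IN GROUP 'Requirement Category' " +
    "AND [System.IterationPath] UNDER 'ExpenseReporting\\Sprint 1'";

foreach (WorkItem requirement in store.Query(wiql))
{
    Console.WriteLine("{0}: {1}", requirement.Id, requirement.Title);
}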

image Note  Removing a test case from a requirement deletes the link to the requirement and therefore affects other plans using the same test case/requirement association.

The complete structure for the Sprint 1 test plan is shown in Figure 21-8 with placeholders for requirements tests, exploratory tests, and automated tests.

9781430243441_Fig21-08.jpg

Figure 21-8.  Complete structure for the Sprint 1 test plan

Moving On

We now have a test plan set up. At this point we can choose to continue with test case design or start testing by running exploratory tests. In MTM 2010 we needed at least one empty test case to run a test, but with MTM 2012 that requirement is removed, so you decide what your needs are. We start by adding test cases first.

Designing Test Cases

At this point we are ready to add some test cases to our plan. Specifically we want to add test cases to test the acceptance criteria for the requirements in our sprint, but we can add any type of test case.

What Is a Test Case?

A test case in MTM represents the test instruction for a tester. It is implemented as a TFS work item, which means we can customize it so that it contains the information the tester needs to complete the test run. The test case can be viewed and changed in any TFS client, except for the test steps, which can only be edited in MTM. Let’s walk through the essential elements of a test case.

Steps

The steps section is of course the central part of the test case, as shown in Figure 21-9. We add steps for the test instructions and provide expected results. The expected results are particularly worth spending some time on because they are the validation points we use to assert that the test case is testing the right thing. A well-formulated expected result can be used to validate the test step both in a manual test and when we automate it, saving time and making the test runs more repeatable.

9781430243441_Fig21-09.jpg

Figure 21-9.  Test case with formatted steps
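
Test steps and expected results are also reachable through the test management API, which can be handy if you generate test cases from existing documentation. A minimal sketch, reusing the project variable from before; the title and step texts are only examples.

// Create a test case with a single step and its validation point.
ITestCase testCase = project.TestCases.Create();
testCase.Title = "Create a project related expense report";

ITestStep step = testCase.CreateTestStep();
step.Title = "Log in as an employee and create a new expense report for a project";
step.ExpectedResult = "The expense report is saved and associated with the selected project";
testCase.Actions.Add(step);

testCase.Save();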

Use formatting to highlight important sections of the test steps. Worth mentioning is that the test step text is selectable when the test is being run, so if you provide a URL in a test step the tester can copy it and paste it into the browser instead of having to type it.

For recurring steps we can create Shared steps. Shared steps are stored as a separate work item and can be shared between test cases, for example to encapsulate the login steps which might be the first sequence in many test cases.

If we want to test multiple combinations of a test, say for instance to test how the application behaves for users of different roles, we can add parameters to the test case (see Figure 21-10).

9781430243441_Fig21-10.jpg

Figure 21-10.  Test case with parameters

Each set of parameters shows up as a test iteration when the test case is run. We also get the nice effect that each data value is copied into the Windows clipboard, so we can paste it into the target UI element.

Test Case Summary

The test case summary contains a description field that is useful for documenting the purpose of the test case (see Figure 21-11). This field is also shown in the Test Runner when the test is later run, so use it to write reminder notes for the tester.

9781430243441_Fig21-11.jpg

Figure 21-11.  Test case summary

Creating Test Cases

In our scenario we want to add test cases to our first requirement, “Create an expense report.” The requirement has acceptance criteria defined for it, which is great input to our test case design. As a start we can create one test case for each acceptance criterion, and later we can add more test cases for edge cases as we find the need. The complete requirement was shown previously in Figure 20-2. Let’s start by creating a test case for the “An employee can create a project related expense report” acceptance criterion. When adding a test case to a requirement, MTM automatically creates a link to the requirement under the Tested Backlog Items tab. Figure 21-12 shows a completed test case.

9781430243441_Fig21-12.jpg

Figure 21-12.  Test case for creating a project related expense report

After adding test cases to cover the acceptance criteria, we can take a look at the product backlog item again. A small but effective feature of the Scrum work item design is that we can view the list of test cases at the same time as we see the list of acceptance criteria, as shown in Figure 21-13. This is a great way to check whether we’ve added test cases to cover the requirements.

9781430243441_Fig21-13.jpg

Figure 21-13.  Create Expense Report requirement with acceptance criteria and test cases

With the test cases in place, we now have a test plan ready to start testing (see Figure 21-14).

9781430243441_Fig21-14.jpg

Figure 21-14.  Sprint 1 test plan with test cases

We can however add some additional structure to the test cases before entering test mode.

Test Configurations

Test configurations allow us to define the test matrix for our tests; for example, we may need to test our application on Internet Explorer 9 and 10. To do so we can create matching configurations. This is done from the Organize tab in MTM by managing configuration variables (see Figure 21-15).

9781430243441_Fig21-15.jpg

Figure 21-15.  Adding a test configuration variable

With the variables defined, we can add test configurations (see Figure 21-16).

9781430243441_Fig21-16.jpg

Figure 21-16.  Adding a new test configuration
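
Configurations can be scripted too. The following is a rough sketch of creating a configuration like the one in Figure 21-16 through the API, reusing the project variable and assuming a configuration variable named Browser has been defined as in Figure 21-15.

// Create a configuration and assign a value to the Browser variable.
ITestConfiguration config = project.TestConfigurations.Create();
config.Name = "Internet Explorer 10";
config.Description = "Default browser configuration for Sprint 1";
config.Values.Add("Browser", "Internet Explorer 10");
config.Save();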

Finally, we can assign each test to the corresponding configurations as shown in Figure 21-17.

9781430243441_Fig21-17.jpg

Figure 21-17.  Mapping test cases to test configurations

Assign to Tester

If we have many test cases and many testers, it can be effective for the test manager to assign test cases to the designated testers. One way to divide the work is to assign tests to users by configuration (see Figure 21-18).

9781430243441_Fig21-18.jpg

Figure 21-18.  Assign test cases to tester

Grouping and Adding Fields

A slightly hidden gem in the MTM UI is that most lists have a pivot feature that allows us to drag columns over the top of the list to group on that field.

We can also add additional columns to the list by right-clicking on the column row.

Figure 21-19 shows examples of grouped columns and column options.

9781430243441_Fig21-19.jpg

Figure 21-19.  Customizing the work item grid in MTM

Test Suite Status

There is a nice feature in MTM to help us control when tests are available for testing. Each test suite has a status we can set to In planning, In progress, or Completed. Only tests in suites with status In progress are shown in the Test view in MTM.
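
The suite status is exposed in the API as well, so a small script can flip all suites in a plan to In progress at the start of a sprint. This is a sketch under the assumption that the 2012 object model allows the state to be set directly on the suites; it reuses the plan variable from the earlier examples.

// Mark every top-level suite in the plan as in progress.
foreach (ITestSuiteBase suite in plan.RootSuite.SubSuites)
{
    suite.State = TestSuiteState.InProgress;
}
plan.Save();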

Moving On

To summarize, here is our shortlist for test design and planning:

  • Create the test suite structure, mark suites as In planning.
  • Create test cases for requirements and scripted tests.
  • Review configurations.
  • Assign tests to testers.
  • Review the test plan, test suites, and requirements.
  • When happy, set the suites to In progress.

With this said, let’s move on to testing!

Running Tests

The testing activity in MTM can be seen as a dashboard for the tester. Here we can do most of the tasks related to testing, including running tests and analyzing test runs, doing exploratory tests and viewing exploratory test sessions, as well as tracking and verifying bugs.

Let’s start by looking at our scripted tests first and then move on to exploratory testing. But before we start it is good to have an understanding of how the Run Tests view works. From the test view we can

  • Run the test (with options to override the plan default settings). You can select tests to run in many ways, for instance by suite or by multiselecting the tests to run.
  • View the result of the last test run for a test.
  • Open test case to read up before starting the test.
  • Change the test status. We can block a test, reset it to active, or mark it as passed or failed without starting a test run.

Filtering Test Runs

Previously we mentioned that as a test planner we can assign tests to testers and configurations. As a tester we can use that information by creating a filter in the Run Tests view so that we only see the relevant set of tests (Figure 21-20).

9781430243441_Fig21-20.jpg

Figure 21-20.  Filtering test runs based on tester and configuration

Working with the Test Runner

Now we are ready to run our first test. Starting the test in MTM opens up the Test Runner, which is another part of the application. The Test Runner starts in a mode where it takes over the left part of the screen and scales the other area, which is nice if you want to test your application in full-screen mode. You can change docking behavior if you want to position the window in a different way, as shown in Figure 21-21.

9781430243441_Fig21-21.jpg

Figure 21-21.  Selecting the Test Runner screen position

When starting a new test we get to choose if we want to create an action recording. An action recording is a recording of all user interaction for the test and is a script that we can use later for automatic test playback in MTM or to generate an automated test in Visual Studio.

The Test Runner displays the test steps that we can mark as passed or failed as we go through the test case (see Figure 21-22). If the test step contains parameters we can bind those to the application we are testing by pasting the data value from the clipboard, which speeds up testing multiple iterations. The parameter value is copied into the clipboard by default when you move to the test step containing the parameter. If you want to copy the parameter explicitly you can do so by just clicking on the data link.

9781430243441_Fig21-22.jpg

Figure 21-22.  Working with a test case in the Test Runner

If the test step has a validation point, it is also displayed in the test step description.

When we are running multiple tests and/or iterations we can easily switch between them using the navigation control in the upper-left part of the Test Runner (see Figure 21-23).

9781430243441_Fig21-23.jpg

Figure 21-23.  Moving between tests and iterations in the Test Runner

One feature in the Test Runner that can be difficult to spot is the test case summary. If you want to read the summary there is a little expander link just above the list of test steps (see Figure 21-24).

9781430243441_Fig21-24.jpg

Figure 21-24.  Viewing the test case summary when running the test

Another nice feature is the possibility to quickly switch between the running test and MTM by clicking the little window icon in the top toolbar as shown in Figure 21-25. Switching from the test run back to MTM pauses the current test but only for the length of the MTM session. If you close MTM, the test run will be marked as failed.

9781430243441_Fig21-25.jpg

Figure 21-25.  Switching between Test Runner and MTM

When in MTM, the test run is shown as In progress and we have the option to resume manual testing to get back to the Test Runner window (see Figure 21-26). We can only have one test pending; starting a new test run with another already running ends the running test.

9781430243441_Fig21-26.jpg

Figure 21-26.  Resume manual testing

When the test is completed, we get back to the test view. If we want to view the data from the test run later we can just press the View result button and get to the details from that particular run for the test case.

Analyze Test Runs

After a test run is complete we may want to analyze the result or have someone else take a look at the findings. We can always get to the latest result from the Run Tests view, but if we want to work with test runs in general we need to switch to the Analyze Test Runs view. From the main view we can do basic filtering and grouping of the test results. Note that by default the view is set to show results from automated tests, which may not be what you expect, but this view is primarily used for following up automated test runs. Figure 21-27 shows the Analyze Test Runs view.

9781430243441_Fig21-27.jpg

Figure 21-27.  Analyze Test Runs view in MTM
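
The run data shown in this view can also be pulled out programmatically, for example to build a custom report of recent runs. A minimal sketch using the test run query support, reusing the project variable from the earlier examples.

// List the test runs in the team project with their start time and state.
foreach (ITestRun run in project.TestRuns.Query("SELECT * FROM TestRun"))
{
    Console.WriteLine("{0} - {1} ({2})", run.DateStarted, run.Title, run.State);
}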

A failed test run is shown in the state “Needs Investigation,” which we resolve by opening the test run, analyzing the result, and then marking the run as completed (see Figure 21-28). Marking the run as completed won’t change the status of the test results for the test cases in the run; it just shows that we have taken action on the test run result.

9781430243441_Fig21-28.jpg

Figure 21-28.  Working with Test Runs details

If we scroll down to the Tests section in the report we get a list of the tests that were run in this test run (see Figure 21-29). Here we can drill down and look at the details of a test run and raise a bug afterward. We can also make a decision about the cause of a problem by selecting failure type and resolution.

9781430243441_Fig21-29.jpg

Figure 21-29.  Test Runs analysis actions in MTM

Opening the result of a particular test shows the details of that test run. One detail to pay a little extra attention to is the Result History list. The result history is a good way to learn more about the test when troubleshooting. If the test has been passing consistently, we have probably introduced an error. If the test sometimes passes and sometimes fails, we could have a regression issue, or there may be a problem with the test case; perhaps we need to add more information to it so that it is run the same way every time.

image Note  Test results are stored per test plan. If you want to track test results consistently it usually works best to have small test plans, each with a distinct purpose.

Running Exploratory Tests

We can also run exploratory tests in MTM. This is a new feature in MTM 2012 that allows us to start testing very quickly without having to create a test case up front. The test experience is similar to the one in the standard Test Runner, but because there is no test case behind the scenes it is naturally more lightweight. There are several ways to start an exploratory test. The Do Exploratory Testing view in MTM is the most common, but we can start an exploratory session from a requirement or the test plan as well.

The Do Exploratory Testing activity (see Figure 21-30) has some features to be aware of. It is quick to start exploring by just pressing the Explore button, but we can also choose to explore specific work items by selecting one or more from the Work Item list. Running an exploratory test on a work item gives the same experience as running one without, but we get an association with the work item that we can use later on (for statistics, or to create a test case linked to the tested requirement, for instance).

9781430243441_Fig21-30.jpg

Figure 21-30.  Exploratory Test view in MTM

Starting the test opens up the exploratory test session. This is a simple version of the standard test runner that we use to document the test session as we run the test. We can do rich text editing and include screenshots as needed. We can even double-click on the screenshot and open it in an image editor to format or annotate it. Figure 21-31 shows the exploratory test runner.

9781430243441_Fig21-31.jpg

Figure 21-31.  Exploratory test runner

If we find a problem we can create a bug directly from the test session. The same goes for a test case; if we realize while testing that this session should be kept as a scripted test, we can quickly press the Create test case button to create one. Both of these features copy the result from the exploratory test into the bug or test case for reference. When you create a bug from an exploratory test, it is possible to change which steps to include in the bug by clicking the “change steps” link in the Steps to Reproduce section. This is good if it was a long exploratory test session and you don’t want too much noise in the bug report.

Note that the test case does not get a test result the way a scripted test run does; an exploratory test is simply run. If we find an issue, we raise a bug from the exploratory test rather than fail the whole test run.

Test Settings

When we run tests we can manually add content to the test results, such as attaching files, including screenshots, or writing comments. This is great for traceability and for test run analysis, but for bug fixing it is also practical to get detailed information about the system under test so that a developer can reproduce the problem quickly. For a tester it is often difficult or time-consuming to manage this part of the test run, so to solve that problem the test platform lets us configure how tests are run, as well as what data gets collected. We control this with test settings in Visual Studio or MTM.

image Note  Visual Studio Test Settings are covered in Chapter 16.

The default settings for test runs can be assigned to the test plan in the test plan properties view. We can also override the default by choosing Run with Options when starting a test run. Either way, we point the test framework to a test setting we have configured earlier.

To manage test settings we switch to the Lab Center in MTM and select Test Settings. For manual tests it is very straightforward to create a new test setting: first we give the test setting a unique name, then we select the environment where the tests are run (see Chapter 23 for more information about test environments), and finally we add the diagnostic data adapters to use in the test run.

For automated tests we can also configure how the test environment should behave during the test run; for instance, we may need to deploy files to the environment before running a test, or execute pre- and post-test run scripts to initialize and clean up the environment.

Data Collection

The central part of a test setting in MTM is the Data and Diagnostics section where we can specify which data collectors we want (see Figure 21-32). Most of the data adapters are configurable to help us fine-tune the data collection for best result.

9781430243441_Fig21-32.jpg

Figure 21-32.  Creating a new Test Settings

The built-in data adapters in MTM 2012 are listed here:

  • Action log: Used to collect UI interactions when the test is run.
  • ASP.NET Client Proxy for IntelliTrace and Test Impact: Used to collect IntelliTrace and Test Impact data from a web server.
  • Event Log: Captures events from the event log; which events to capture, as well as how many events to collect, can be configured.
  • IntelliTrace: Used to collect run-time and exception information from the system under test that can be used by developers to speed up the time it takes to understand the cause of a problem.
  • System Information: Gathers system information from a machine, such as amount of RAM, operating system, and browser type.
  • Test Impact: Collects test coverage data used to calculate test impact so we can get help to decide which tests to re-run based on code changes.
  • Video Recorder: Records the desktop where the tests are run. Can be configured to store video recordings only for failed tests, which helps reduce the amount of test result data stored in the TFS database.

Most of the adapters also let us configure how the data collection should work, for instance by controlling whether a video recording should be saved for successful test passes, as shown in Figure 21-33.

9781430243441_Fig21-33.jpg

Figure 21-33.  Configure test settings in MTM

image Note  If you have data in your application that could help troubleshooting a bug, then you can extend MTM by creating a custom diagnostic data adapter. A custom adapter gets called by the test infrastructure during test execution and can, among other things, pass files to the test engine when a test case completes. More information on how to create a custom diagnostic adapter can be found on MSDN: http://msdn.microsoft.com/en-us/library/dd286737.

When we later want to use a test setting, we either assign it to the test plan or choose it when we start a test run by selecting Run with Options (see Figure 21-34).

9781430243441_Fig21-34.jpg

Figure 21-34.  Specifying Test Settings when starting a test

image Note  See Chapter 18 for information on IntelliTrace and how that can be used by developers to reduce the time it takes to reproduce a problem.

Typically we will create one test setting per type of test scenario, for instance local testing, detailed diagnostics, and remote testing. The recommendation is to use as lightweight a test setting as possible. This way we speed up testing and reduce the amount of diagnostic data that gets collected; when needed, we can re-run tests using a different setting to gather more information.

image Note  The test results stored in TFS can quickly fill up the TFS database, so be conservative with what data you save from test runs. The size of the TFS database not only affects operational performance, but may also slow down maintenance jobs such as backups. The TFS Power Tools contain a test attachment cleaner tool that can be used to remove unnecessary test artifacts (http://msdn.microsoft.com/en-us/vstudio/bb980963.aspx).

Integration with Builds

So far we have worked with test plans and test cases without a direct relation to the system under test. True, we have looked at how we can use product backlog items to document acceptance criteria, and we can use the backlog as a tool for planning which features are implemented when. But when it comes to keeping track of which version of the system we are testing, or in which build a bug fix has been integrated, we need more.

One solution to this challenge is to use TFS builds. With TFS builds we can track which code we are testing by assigning a build to a test plan or when starting a test run. Also, by having the build associated with the test run, we can automatically tag bugs created from the run. When a developer checks in code and associates the changeset with a work item, the TFS build can, if configured to do so, tag the work item as integrated in that particular build. So as you can see, we get a lot of nice additional capabilities by associating builds with tests. Let’s now take a look at how to set this up.

Assign Build

The Assign Build view in MTM is useful for assigning a build to the test plan, as well as for finding out what has changed in a build compared to the one we are currently using. The latter can help us decide whether to deploy a new build or wait until more features have been completed. In Figure 21-35 you can see that one bug has been fixed between the build in use and the latest build.

9781430243441_Fig21-35.jpg

Figure 21-35.  Assign Build and looking at changes made between builds

image Note  If you are not using TFS builds in your project you can still get some or all of this functionality by creating your own fake builds. The fake build needs to be created so that it includes the information used by MTM, for instance a drop folder with the build binaries, or just the build number to be listed in the “Found In/Integrated In” fields of the bug report.

For more information about how to use the TFS API to create a fake build, see http://blogs.msdn.com/b/jpricket/archive/2010/02/23/creating-fake-builds-in-tfs-build-2010.aspx.

If you just want to get the build into TFS, check out Neno Loje’s command-line tool that wraps the API mentioned above: http://msmvps.com/blogs/vstsblog/archive/2011/04/26/creating-fake-builds-in-tfs-build-2010-using-the-command-line.aspx

Recommended Tests

Another interesting feature in MTM is Recommended Tests. Recommended tests help us decide which tests might need to be re-run. The calculation is based on data from previous test runs matched against changed code checked in to TFS. This feature is called Test Impact Analysis and works for any test run against managed code. To resolve the changes we must also use TFS builds; one build defines the baseline to do test impact analysis against, and another build is used to calculate the difference.

image Note  Only test cases are shown in the Recommended Tests view in MTM. For automated tests we need to associate the automated test method with a test case for it to show up in the Recommended Tests view. See Chapter 22 for guidance on how to connect an automated test to a test case.

To get started with Test Impact Analysis and Recommended Tests, use the following procedure:

  1. Create a test setting in MTM that collects test impact data.
  2. Create a TFS build.
  3. Assign a build to the test plan in MTM. This is used as the baseline for determining recommended tests.
  4. Run a manual or automated test. Only successful tests are used to generate impact data; using partially successful tests could give unpredictable results.
  5. Write code, check in, and create new builds.
  6. Select a new build and compare it against the previous build to find recommended tests.

Recommended tests should be used as an indication of which tests to run; many factors can affect the outcome of the test impact analysis, so we recommend using it together with other techniques to select the tests to run.

Reporting Bugs and Validating Fixes

At this point we have looked at test planning, design, and different ways to run tests. The final thing to deal with when testing is the inevitable case when you discover an error in the application.

The features in MTM for bug management are designed to be simple to use, as well as to help us reduce the time it takes to turn a bug around from found to fixed to verified. This may sound like utopia, but thanks to the integration in the test runner applications, as well as the data collectors, we can actually manage the bug process in a really nice way using MTM.

Creating a Bug

The first part is to create a bug. Creating a bug can be done from both the Test Runner and the Exploratory Test window (Figure 21-36).

9781430243441_Fig21-36.jpg

Figure 21-36.  Creating a bug when running a test

When a bug is created, data from the diagnostic data adapters is collected and added to the bug report. The test steps are also copied into the bug form, which makes it really simple and quick to report a new bug. In fact, it is so easy that we can report a bug for any problem we encounter, be it during development or test. Figure 21-37 shows an example of a bug report with rich test run data attached.

9781430243441_Fig21-37.jpg

Figure 21-37.  A bug report created from a test run in MTM

It is sometimes debated whether it is good practice to report bugs in a sprint where the tested feature is still under development. Our recommendation is that it is better to report the bug and let the team decide when to deal with it than to distract a developer with the issue right away.

image Note  How does MTM know which work item type is a bug? MTM uses the Bug work item Category to find the default type for a bug and open that form. If your work item type for bugs is called Defect instead of Bug you can update the work item category to reflect this.

Reporting bugs directly from the test run is of course the most common way to do it. But what if we forget, or didn’t think it really was a bug at the time, and want to add one later? Rather than having to create a bug report from scratch, we can instead use the test run result (see Figure 21-38).

9781430243441_Fig21-38.jpg

Figure 21-38.  Create a bug report from the test run result

Verifying Bugs

After a bug has been filed it goes through the process of triage and development until it is eventually fixed and ready for test. To make it easy to find out the state of a bug we can use the Verify Bugs view in MTM (see Figure 21-39).

9781430243441_Fig21-39.jpg

Figure 21-39.  Verify bugs in MTM

A good way to work with this view is to look at the State and Assigned To fields to track the progress of the bug. In the previous example we can see that the bug has been approved and committed by the team to be fixed. No one is working on it yet because the Assigned To field is empty. When a developer fixes the bug, the check-in should eventually become part of a TFS build, which in turn can update the Integrated In field on the bug, as shown in Figure 21-40.

9781430243441_Fig21-40.jpg

Figure 21-40.  Verify bugs in MTM with integrated in build set

image Note  Why is the Verify button sometimes disabled? We can only use the Verify workflow when the bug is associated with a test case. MTM uses that information to open up the test case for us when we go and verify.

Fast-Forward Playback

When we verify a bug fix we can either run the test case again or use a feature in the Test Runner called fast-forward playback. As the name implies, this feature allows us to replay a test session, and it does so by using an existing action recording associated with the test case. As you can see in Figure 21-41, each test step with an associated action recording is shown with an orange line next to it; this also tells us a little about how the actions were recorded. To get the best results from this feature, we recommend focusing on getting a clean recording and making sure to mark each step so it correlates with the action log. There is no way to edit the action recording later, so the only option is to re-run the test and save a new action log.

9781430243441_Fig21-41.jpg

Figure 21-41.  Fast-forward playback using MTM

The playback feature is of course just as useful (perhaps more) during normal testing as well. How great to be able to run a regression test with the click of a button!

image Note  Action recordings are saved per test plan which means that a test case can have a different action recording in different scenarios.

Summary

In this chapter we have covered a lot of ground and you have seen how Microsoft Test Manager can be the tool for almost all activities in the test process. We can do test planning and assignment, work with test case design, and run tests from scripts or as exploratory test sessions. From the test runs we can analyze and file bugs for errors that we find and track the fixing process—all within one single application.

In the next chapter we look at how we can automate testing and save work in the process. Visual Studio has a wide range of tools for automated testing, ranging from UI testing to stress and load testing.
