
4. Running, Logging, Verifying

Gennadiy Alpaev

Dnipro, Ukraine

When running your tests against the application under test, it is very important to have a detailed report with comprehensive messages about any issues the tests faced during the run. It is also important to decide how to organize your test runs and how often tests should be run. This chapter describes some of the best practices in software testing automation related to running tests and creating logs for further investigation.

4-1. Run Scripts as Often as Possible

Generally, it is useful to run tests whenever a new build appears. But what is the use of the tests if they are unstable and fail with errors even when the application runs correctly?

When a test has just been created, it may contain a lot of unforeseen details. For instance, will the test work normally on a slower computer, in a virtual machine, with other settings, or with a slower network connection? And vice versa: how will the test work in a better test environment? What happens if the amount of data on the server is different each time and the speed of the application changes with it?

To stabilize your tests, run them as often as possible. The more often they run, the more likely you are to spot problems and immediately fix them.

There is no need to run the tests on a different build each time; you can use the same build for multiple runs. This is especially true at the stage of introducing automation, when there are not many tests and it doesn't take much time to complete them all. Running all the tests will become more difficult as the suite grows, but if you run scripts often and fix the various test-related issues you find, then in time your tests will be much more stable, and there will be no need for such regular runs on the same build.

Running scripts often is especially helpful for tests that run for a long time and depend on many factors, or in which a large number of verifications are performed. Such tests should be debugged most thoroughly, since in the future you may want to run them less often (for instance, once a week rather than for each build), so they must be very reliable.

4-2. Perform an Automatic Restart of Failed Tests

Sometimes tests fail when run automatically as a suite but pass when each of the failed tests is run separately. One possible reason is an error that appears only when the application has been running for a long time, or a problem with a specific sequence of scenarios. Such cases need to be investigated to find their causes and reproducible scenarios, and then fixed.

But sometimes such problems arise because of the specific test environment or the way the automation tool interacts with the application under test. In such cases, tests can hang for no apparent reason or simply report errors that look strange at first glance and cannot be reproduced. If this is the case, it makes sense to automatically restart the tests that fail for unknown reasons. Such a process should look like this (a code sketch appears at the end of this section):

  1. During the test run, the name of each failed test is added to the list.

  2. After all the tests have finished, we restart our automation tool.

  3. We run each of the failed tests individually, recording the results separately from the first results.

  4. Then we manually review the results.

If one of the tests still fails regularly, it makes sense to look more closely at what the problem may be. If you cannot see an obvious problem, but the tests passed on the second run, you can consider them successful.

Be careful, however! There is always a possibility that a test fails the first time due to errors in the test itself or in the tested application, and restarting the test does nothing more than hide the problem. In such cases it makes sense to understand the causes of the errors and eliminate them. To identify such tests, you should keep statistics on all runs and review them from time to time for the presence of “suspicious” tests.
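
Here is a minimal sketch of this rerun process in Python. The run_test() and restart_tool() callables are placeholders for whatever your automation tool actually provides:

def run_suite(all_tests, run_test, restart_tool):
    failed = []
    for test in all_tests:
        if not run_test(test):          # 1. collect the names of failed tests
            failed.append(test)

    restart_tool()                      # 2. restart the automation tool

    retried = {}
    for test in failed:                 # 3. rerun each failed test individually
        retried[test] = run_test(test)

    for test in failed:                 # 4. report both runs for manual review
        outcome = "passed on retry" if retried[test] else "failed twice"
        print(f"{test}: first run failed, {outcome}")
    return retried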

4-3. A Disabled Test Should Be Provided with a Comment

Sometimes you have to temporarily disable an existing test. For instance, the need to disable a test can occur if it produces an error that affects other tests, or if the corresponding functionality is temporarily disabled in the application under test.

When disabling a test, you should always write a comment for the disabled test, so that anyone stumbling upon it immediately knows the reason for the disabling. The comment should indicate the author of the disabling, the date, and the reason (preferably with the defect number, if one was filed in the tracking system).
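
For instance, in a Python unittest suite such a comment might look like the following (the name, date, and defect number are invented for illustration):

import unittest

class CalculatorTests(unittest.TestCase):
    # Disabled by J. Smith on 2017-05-12: percent calculation crashes
    # the application under test, see defect CALC-1234 in the tracker.
    # Re-enable once the defect is fixed.
    @unittest.skip("CALC-1234: percent calculation crashes the application")
    def test_percent(self):
        ...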

Comments on disabled tests will help the author of the disabling (if, after a long time, you have to recall the reasons for it) as well as other testers (for instance, if the author leaves the company and can no longer be reached).

You can go further and implement an automatic check of whether the corresponding defect is still open at the time of the test run. If the defect is already closed, you can either run the test automatically or generate an error stating that the test should be enabled.
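
Continuing the unittest example, such a check might be sketched as a decorator; is_bug_open() is a placeholder for a query to your tracking system's API:

import functools
import unittest

def is_bug_open(bug_id):
    """Placeholder: ask the tracking system whether the defect is still open."""
    raise NotImplementedError

def skip_while_open(bug_id):
    """Skip the test while the defect is open; fail loudly once it is closed,
    as a reminder that the test should be enabled again."""
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            if is_bug_open(bug_id):
                raise unittest.SkipTest(f"disabled until {bug_id} is fixed")
            raise AssertionError(f"{bug_id} is closed: enable this test")
        return wrapper
    return decorator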

It is also useful to review the disabled tests from time to time and update them if necessary. For instance, after some time the functionality covered by a disabled test may be completely removed from the application; in this case, it makes no sense to keep the corresponding test.

4-4. Errors in Logs Should Be Informative

Imagine that you come to work, open the nightly test reports, and you see an error message like the following:

ERROR: incorrect value

What does the text of the error tell you? Nothing!

There are a few components missing: the expected value, the actual value, the place where the error occurred, and the actions that led to this result.

For instance, suppose we test a simple Calculator application, such as you might find in Windows or OS X, by entering a lot of different mathematical expressions and verifying the results. An informative error message would look like this:

ERROR verifying result for expression "2+2*2". Expected: "6", actual: "8"

Pay attention to the quotation marks that enclose the values. They are not mandatory, but it is desirable to use them in case there are spaces or other nonprinting characters at the beginning or end of a value. When each of the values is quoted, such problems are easier to discover.

If possible, you can also arrange the expected and actual values on separate lines, one under the other. That makes it easier to see the differences, especially with long strings.
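
A minimal sketch in Python of a verification helper that follows both recommendations; the function name and message format here are just suggestions:

def verify_equals(context, expected, actual):
    """Report a failed verification with quoted values on separate lines,
    so that stray spaces and long-string differences are easy to spot."""
    if expected == actual:
        return True
    print(f'ERROR verifying {context}.')
    print(f'  Expected: "{expected}"')
    print(f'  Actual:   "{actual}"')
    return False

# Example: verify_equals('result for expression "2+2*2"', "6", "8")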

4-5. Make a Screenshot in Case of Error

No matter how detailed your logs are, nothing will replace a screenshot taken at the moment an error occurred. This is especially true for GUI applications, for two reasons:

  • It is always easier to understand the error visually.

  • It happens that the application under test is affected by something that could not be foreseen (for instance, a system message appeared and stole the focus).

Some automation tools go further, offering to take a screenshot each time the automation tool interacts with the tested application. Doing so is not recommended, because such a large number of pictures increases the size of the log, and the need for these screenshots is extremely rare.

If your tool does not have the option of taking a screenshot automatically in case of an error, extend its capabilities yourself so that it happens automatically. At the same time, pay attention to some features of different types of applications:

  • Desktop applications rarely contain scrolling pages. All controls are usually placed in one window, or the interface is split across several windows with transitions between them (for instance, via a Next button).

  • With web applications, a long page you need to scroll through to see all the content is quite common. You might want any screenshots to capture the entire scrolling region.

Often, tools allow you to take either a screenshot of the visible screen or a snapshot of the whole page, and these actions may require different function calls. Therefore, when working with a web application and saving a screenshot, always think about what kind of information you need.

If you need the content of the entire page, then save the page itself. If you need a screenshot (for instance, to see not only the browser window but also other applications), then use the method that saves the entire screen, while remembering that some of the page content may not fit into the image.
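
As an illustration, here is how an automatic screenshot on error might look with Selenium WebDriver in Python; the wrapper function and the file naming scheme are our own choices, not a standard API:

import os
import time

def run_step_with_screenshot(driver, step, name):
    """Run one test step; if it raises, save a screenshot before re-raising."""
    try:
        step()
    except Exception:
        os.makedirs("errors", exist_ok=True)
        path = os.path.join("errors", f"{name}_{int(time.time())}.png")
        driver.save_screenshot(path)  # captures the visible viewport only
        # Selenium 4 with Firefox can capture the whole scrolling page instead:
        # driver.get_full_page_screenshot_as_file(path)
        raise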

4-6. Check the Accuracy of Tests Before Adding Them to the Regular Run

So, you wrote a test, and it runs successfully. Let's say it is a simple calculator test that checks the expression "2+2". It will look like this:

function test_calculator()
{
  calculate("2+2");    // enter the expression into the calculator
  verify_result("4");  // compare the displayed result with "4"
}

How can you guarantee that the test will report an error if the result suddenly turns out to be five? The solution is simple: change the expected value from "4" to any other value and run the test again. Do this for each verification. Happy with the result? Then feel free to commit the test to the version control system and add it to the regular runs alongside the rest of the tests, but don't forget to restore the correct expected values before doing so!

There are more complex cases. For example, while the test is filling a window with data, close the window manually. How will the test behave? How will your automation tool behave as a whole?

You need to make sure that your tests behave correctly in difficult conditions and that your automation tool doesn't freeze or crash. When the tests and the tool behave predictably, it is always easier to quickly understand the causes of failed tests.

4-7. Avoid Comparing Images

Very often, novice automation engineers make the mistake of comparing images rather than results. For example, when they cannot check the individual properties of a control, they verify a screenshot of the element, or even of the entire window, against a known good image of that element or window.

This approach of comparing screenshots is bad for several reasons:

  • The slightest change in the appearance or size of an element leads to an error.

  • Comparing images is much slower than comparing the properties of the same element.

  • Updating the expected results for such verification points is usually more time consuming than updating the properties.

Usually, verification via screenshots of elements results from a lack of knowledge of the automation tool (provided, of course, the tool supports the type of your application under test and this particular control). It is better to spend a few days figuring out how to work with your application than to spend several hours a week in the future maintaining something you could have avoided altogether.

Nevertheless, although verification of screenshots is considered bad style in automation, there are several cases when the approach can be used:

  • Some tools work only with screenshots; this makes them universal for any application; however, they are relatively slow, and their tests are less stable.

  • If you are testing an application that works with graphics, then the comparison of screenshots is usually the only possible approach for performing verifications.

  • If you still can't work with the control, it is better to use screenshots than to blindly click on coordinates inside the window.

In these cases, it usually makes sense to adjust the advanced settings if your automation tool provides them. Settings to look for include the following:

  • Tolerance (may be called threshold or inaccuracy interval) – allows you to ignore a certain number of differences, specified in pixels or as a percentage.

  • Transparency – allows you to specify an area inside the screenshot that must be ignored during the verification (for instance, there may be a Date field, which changes every day).

  • Partial comparison – compare not a screenshot of the entire control, but only the significant part of it (for instance, for a button it is enough to verify the area where the text is located).

The set of available options depends on the tool you use. Some tools provide a wide set of options for image comparison, while others don't have any options at all. If you are unlucky enough to use a tool without the necessary options, you can write your own functions to compare images, though that may be a tricky task to implement.
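
For instance, a basic tolerance comparison can be built on the Pillow imaging library in Python. This is a minimal sketch, assuming same-size RGB screenshots; the function name and the default threshold are arbitrary:

from PIL import Image, ImageChops

def images_match(expected_path, actual_path, tolerance=0.01):
    """Return True if the images differ in at most a `tolerance` fraction
    of their pixels; a size mismatch always counts as a difference."""
    expected = Image.open(expected_path).convert("RGB")
    actual = Image.open(actual_path).convert("RGB")
    if expected.size != actual.size:
        return False
    diff = ImageChops.difference(expected, actual)
    changed = sum(1 for pixel in diff.getdata() if pixel != (0, 0, 0))
    total = expected.size[0] * expected.size[1]
    return changed / total <= tolerance

A transparency region could be handled in the same spirit by pasting an identical neutral rectangle over both images before the comparison.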
