7.3. Beyond Simply Breaking the Application

We have discussed the relationship between automated and manual testing, but as we mentioned, there are times when a test simply can't be automated. This is where you need manual testing.

7.3.1. Usability

One key area under the banner of manual testing is usability. Although many projects have dedicated usability testers, it is still the responsibility of every tester to ensure that the application can be used effectively. The most important thing is to view the application from the viewpoint of the user. Ignore any prior knowledge and treat it as if you were simply using it on a day-to-day basis. You want to make sure that the application is clear and understandable for everyone who needs to use it. It's pointless having a web application that can only be used correctly by the team who built it.

When using the application, make sure that every page has a very clear UI and that you know exactly what you are expected to do and how to proceed. For example, with a site such as the WroxPizza example it is important to ensure that people know how to add items to their order so they can purchase them. If this is confusing, the chances of them using the site and purchasing an item are reduced. You also want to make sure that the pages are designed in a clean fashion. Can you see all the relevant information on the page without having to constantly scroll around? Can you understand the layout and how all the sections of the page relate? Asking these questions is important to ensure that the application can be used successfully.

On the topic of ensuring that you can see everything, it is important to test the usability of the application using different screen resolutions. Cross-browser testing is important, but testing the user's platform matters just as much. It's very common for developers to create a site on a 30-inch widescreen where it looks amazing, clear, and understandable, but when viewed on a 17-inch screen at 1024 x 768 it is cluttered, hard to read, and very confusing because of the space limitations. The same is true for larger screens: you want to make sure that the rendering on a 30-inch screen takes advantage of the space while not looking too empty.
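
As a hedged illustration (browser automation with Selenium WebDriver is our assumption here, and the URL is illustrative; any tool that can control the window size will do), a small C# harness can load the same page at two common resolutions and capture a screenshot of each for review:

using System.Drawing;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class ResolutionCheck
{
    static void Main()
    {
        // Assumed tooling: Selenium WebDriver 3.x with Firefox installed.
        IWebDriver driver = new FirefoxDriver();
        try
        {
            foreach (Size size in new[] { new Size(1024, 768), new Size(1920, 1200) })
            {
                driver.Manage().Window.Size = size;
                driver.Navigate().GoToUrl("http://localhost/WroxPizza/Default.aspx"); // illustrative URL

                // Capture the rendering at this size so the layout can be reviewed by eye.
                ((ITakesScreenshot)driver).GetScreenshot()
                    .SaveAsFile("layout-" + size.Width + "x" + size.Height + ".png",
                                ScreenshotImageFormat.Png);
            }
        }
        finally
        {
            driver.Quit();
        }
    }
}

The screenshots won't tell you whether the layout is confusing, but putting them side by side makes clutter and wasted space obvious at a glance.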

Another thing that can harm the usability of an application is ambiguity. If you use terms which have multiple meanings, users may be confused about what you intend. This is even more relevant for web applications, as they potentially cater to users from multiple countries and cultures, so it is important to avoid ambiguity and ensure that the site is clear to every group. Ambiguity can also be reduced by using commonplace terms and button text. If you follow the de facto standard for various parts of the application, users can process the information automatically instead of having to search for what they are looking for. For example, "Sign In," "Login," and "Authenticate" all have the same meaning; however, if users want to log in to the application they are unlikely to scan the page looking for a link called "Authenticate." Following the de facto standard allows users to scan more quickly and find what they are looking for without any effort.

Another aspect of web applications which commonly causes frustration for users is delay. There are a number of ways web applications can suffer delay; the most common is simply the time it takes to navigate the site. If your application accesses data from an external resource such as a web service or database, this additional overhead can cause a delay for the user. Even small delays can have a huge impact on a user who is expecting the page to appear immediately. When testing the application you need to ensure that the delay when using the site is not too great. The effects of a delay can be offset by the use of Ajax, allowing parts of a page to be loaded asynchronously; however, if the delay is long enough this can actually cause a worse experience. In the next chapter we will cover load testing in more depth.
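
As a rough, hedged sketch (the URL and the two-second budget are purely illustrative choices), you can spot-check delay long before formal load testing with a small console harness that times a request:

using System;
using System.Diagnostics;
using System.Net;

class PageDelayCheck
{
    static void Main()
    {
        Stopwatch watch = Stopwatch.StartNew();
        using (WebClient client = new WebClient())
        {
            // Download the page body; this measures server plus transfer time only,
            // not client-side rendering.
            client.DownloadString("http://localhost/WroxPizza/Default.aspx");
        }
        watch.Stop();

        Console.WriteLine("Page took {0} ms", watch.ElapsedMilliseconds);
        if (watch.ElapsedMilliseconds > 2000)
        {
            Console.WriteLine("WARN: slower than the illustrative 2-second budget.");
        }
    }
}

This is no substitute for proper load testing; it simply gives an early warning when a page drifts well past what a user would tolerate.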

Delays are a concern, but timeouts are an even bigger one. Imagine you have a long application form and you're nearly at the end when the phone rings. After the call you return, only to find that the application has lost your nearly complete form. As a user, how would you feel about having to start again? The same happens if your network connection drops, which with the rise of 3G and mobile working is more and more likely. In this situation, as a user you would like the application to cope: you would expect it to have saved the information somehow, so that it was available the next time you returned. Although this takes time and consideration to implement, it is important to think about because it affects how people interact with your application. Remember that in ASP.NET the default session timeout is 20 minutes, so if you expect there is a chance people might leave the window open longer than that, you need to take it into account.
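
As a minimal sketch of taking this into account, the ASP.NET session timeout is configurable; the 60-minute figure below is an illustrative choice, not a recommendation. In web.config:

<system.web>
  <sessionState timeout="60" />
</system.web>

Or in code, in Global.asax.cs (the timeout is expressed in minutes):

protected void Session_Start(object sender, EventArgs e)
{
    // Give users a longer window before a half-completed form is lost.
    Session.Timeout = 60;
}

Elsewhere, code should never assume session data survived; a null check avoids crashing when it didn't (ShoppingBasket is a hypothetical type):

ShoppingBasket basket = Session["Basket"] as ShoppingBasket;
if (basket == null)
{
    // The session expired or was never created; start afresh rather than fail.
    basket = new ShoppingBasket();
    Session["Basket"] = basket;
}

Raising the timeout trades server memory for user convenience, so it is worth testing against realistic usage rather than simply setting a huge value.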

Session timeouts are a problem in a large number of applications. Developers often overlook the fact that people leave browser windows open in the background; they store information in a session and expect it will always be there. Sadly, this is not the case. These issues matter more for some applications than others, but they should be considered for all ASP.NET sites.

There is, however, a major problem with usability testing. Testers and developers generally invest a lot of time and energy in the product and, because they know about previous discussions within the team, often automatically dismiss issues as "known faults" or "by design." For this reason, it is often a good idea to bring in third parties with fresh eyes to help with usability testing. They shouldn't perform all the tasks, but they will be able to comment on the general flow and on whether the application is understandable in the way it is structured. The results are generally very useful and interesting.

7.3.2. Documentation

Along with usability testing, documentation also needs to be tested. There are two main forms of documentation. One is the general help section your site might provide, while the other is the general wording, directions, and text which you find on the site itself.

When testing the actual help sections of the site, the most important thing is to check that the help matches the UI. There have been a number of times when we have used help systems ourselves, only to find that they refer to an older version of the site. Any screenshots or steps should always be updated to reflect the latest version of the application, and this needs to be tested to ensure it has been done. Another thing to test is that the steps actually solve the problem and help the user. It is very easy to miss a step or provide an incorrect code sample; simply follow the guide as the user would and ensure everything has been covered as required. Finally, you need to verify that technical terminology has been used correctly and in the right places so that it does not confuse users. In some instances, depending on who your target user is, technical wording might cause more problems than it solves and should be removed.

The other part of documentation is the general wording of the site. It is important that the text on the site is correct and accurate, without any spelling mistakes. It is the role of a tester to ensure this is the case and that the text helps the user where required without being overpowering. We have found that users generally scan-read web pages, and this includes documentation, so try to make your pages scannable.

7.3.3. Error Messages

In a similar fashion to documentation, error messages also need to be tested. The most important point about error messages is that they need to be helpful. Error messages which simply say "Sorry, an error has occurred" annoy end users: they don't provide any insight into why the operation failed, whose fault it was, or how to recover. Errors are a fact of life; they will occur. What matters is how the site handles them. The best error messages are the ones which give users some helpful information about why the error occurred and guide them on how to take the next step. After an error, users are completely lost and taken out of their comfort zone, and the error message should appreciate this, guide the user forward, and if possible help solve the error. By helping users past the error, they will feel happy that they managed to get around the issue. If the site just fails with no help or information about how to proceed, users are lost and will probably just close the browser, losing you the visitor. If you help them proceed past the error and stay on the site, you retain the visitor.
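
As one hedged sketch of handling this in ASP.NET, a global error handler can log the real exception for the team and send the user to a friendly page that explains what happened and how to proceed; Error.aspx is a hypothetical page name:

// Global.asax.cs
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    // Keep the technical detail for the team, not the user.
    System.Diagnostics.Trace.TraceError(ex.ToString());

    // Clear the error and show a friendly page that apologizes, explains in
    // plain language what went wrong, and offers a route forward.
    Server.ClearError();
    Response.Redirect("~/Error.aspx");
}

The friendly page itself then carries the guidance discussed above: what happened, whether the user can retry, and a link back into the site so the visitor is not lost.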

The one thing which error messages should never do is make the user feel stupid. Even if the error was the user's fault, this should not come across in the message, as it will have a negative impact at a point where the user is already unhappy that something failed in the first place.

Although manual testing includes tasks that are not aimed at breaking the application, one of its most important jobs is actually finding faults. The most powerful technique for this is exploratory testing.

7.3.4. Exploratory Testing

Exploratory testing involves learning how the application works internally, developing test cases, and executing test cases as a single process. It is designed to let the tester learn about the application by using it in various ways with the aim of finding faults and bugs. While using the application during exploratory testing, the tester develops a core understanding of how inputs flow through the system and affect the output. By exploring the different options and trying different inputs, they should be looking for issues or parts of the system which don't seem right; this could be anything from rendering problems to incorrect calculation results. This exploratory process is important: when writing automated test cases you are generally very focused on a certain input with an expected output.

With exploratory testing you are free to try anything. Ideally you should try things which might cause the application to break, or which users might accidentally attempt. Common examples are validation errors, such as being able to enter letters into quantity fields. Exploratory testing is also the perfect time to test edge cases and to attempt tests which were not even considered during the automated testing stages. As you try these test cases and learn more about the system, you become more likely to understand how different inputs affect how the system behaves. This should lead the tester to think of new inputs and different ways to use the system. This different way of thinking is what finds bugs, and it is how professional testers earn their money.
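
As a hedged illustration of the quantity-field example, server-side code should never trust that the posted value is numeric; the class and method names below are hypothetical:

public static class OrderInput
{
    // Defensive parsing of a quantity field; names are illustrative.
    public static bool TryGetQuantity(string rawValue, out int quantity)
    {
        // Reject non-numeric input such as "abc" as well as zero or negative values.
        if (!int.TryParse(rawValue, out quantity) || quantity < 1)
        {
            quantity = 0;
            return false;
        }
        return true;
    }
}

When the check fails, the page should re-display the form with a clear message rather than throwing an exception, which is exactly the kind of behavior exploratory testing flushes out.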

After you have found a bug, it's likely that other bugs exist within the same area of the system. Where one bug exists, it may be that the developer overlooked a case, and as a result the same case could have been overlooked in other places in the system. Digging deeper into the root cause to find more bugs is key to exploratory testing. If you simply find one bug and then move on to a completely different part of the system, much of the knowledge you built up while finding the original bug is lost and wasted. After a bug has been identified, it can also help to investigate the underlying codebase and identify the code which caused the problem. Knowledge of the codebase gives you the opportunity to identify more bugs, as simply looking at the code in a code-review fashion can uncover issues.

Even though exploratory testing is flexible and open, it should also be targeted. The best time to perform it is after a feature has been implemented. At this point you can use the time to explore the new functionality and quickly identify any major concerns and issues. After these have been fixed, you can start implementing the acceptance tests around the functionality to verify that it works as expected. However, exploratory testing shouldn't be limited to new functionality; it should touch different parts of the system to ensure everything is working as expected.

This brings us to another important point: the learning process and the bugs identified are an important part of the lifecycle, and as such they should be recorded properly. For example, if you identify a fairly critical issue with a feature, it can be useful to add the test case to your set of automated tests to ensure that the issue doesn't reappear in the future. The same applies if you have identified similar types of issues across different sessions, as this might indicate a weakness in your automation and should be investigated.
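
As a hedged sketch of promoting such a finding into your automated suite (NUnit is an assumed framework choice here, and OrderInput.TryGetQuantity is the hypothetical helper from the earlier sketch):

using NUnit.Framework;

[TestFixture]
public class QuantityValidationRegressionTests
{
    // Pins down a bug found during an exploratory session so it cannot silently return.
    [Test]
    public void Letters_In_Quantity_Field_Are_Rejected()
    {
        int quantity;
        bool accepted = OrderInput.TryGetQuantity("abc", out quantity);
        Assert.IsFalse(accepted, "Non-numeric quantities should be rejected.");
    }
}

A short comment in the test linking back to the original bug report makes it clear why the case exists when someone revisits it months later.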

Other alternatives include simply recording the tests attempted as manual test cases, as we will discuss later in this chapter. Another useful approach is to record the screen. The advantage of this is that you can rewind the video and find the steps which highlighted the problem. It is also an easy form of documentation of the test cases attempted and the areas of the application covered, as well as letting you return to particular points during the lifetime of the application to remember previous behavior. When it comes to screen recording software, TechSmith's Camtasia comes highly recommended, and there are also many free solutions available.

Recording and exploratory testing are a great combination when it comes to bug hunts. Bug hunts are testing sessions set up to allow anyone within the business to attend an open session at a set time and place. The aim is for people to pair up around a laptop and test the application. Generally prizes, cakes, and tea or coffee are offered to create a friendly and relaxed atmosphere where people can enjoy themselves while they test the application. The advantage of bug hunts is that they allow people from outside the development team to use the application while it is still being developed. They let those people provide feedback to the team, but also, because this is likely to be the first time they have used the application, they provide a great opportunity for usability testing and for identifying issues which might have been overlooked by a team who have lived and breathed the application for the past six months. Bug hunts are also a useful opportunity to share knowledge amongst different parts of the business. Everyone uses the application in a slightly different fashion and can provide real insight into how people naturally behave when using it. Generally, outright bugs are only a small part of what is found; a bug hunt can highlight areas of confusion, lack of documentation, or other issues not considered by the team.

Recording these bug hunt sessions allows the team to review how people interacted with the software, revealing pauses where they struggled to understand the UI and making it easy to reproduce any bugs found. When combined with a webcam, you can also see the user's reaction to various parts of the system. This can be invaluable information.

While you are performing exploratory testing, there are various specific things you can try in an attempt to break the application, and these are discussed in the next section.
