Appendix C. Software Testing Tips

During my four-and-a-half years as a software design engineer in test on the Visual Studio team, I picked up a great many things beyond just the ins and outs of Visual Studio. When I first started blogging about software testing, I was pleasantly surprised at how many people found it interesting. They let me know by linking to my blog, leaving comments on my blog posts, and sending me their interview questions to answer. Well, they sent me their questions, but I didn’t answer them. =D

Five Tips for Surviving as a Tester

Following are some of the lessons I learned from testing the Visual Studio IDE. Whether you are a professional tester or you just do software testing occasionally, you may find these tips to be helpful.

Tip 1: Never assume anything

I frequently tell two stories about how I learned this tip the hard way.

Story number one begins during my first six months at Microsoft, during which time I was assigned my first project where I was the only tester. We had just deployed a Web site internally within the Microsoft corporate network as an alpha release to collect internal feedback. Not even 20 minutes went by before the developer sent an e-mail message saying that the site was throwing errors in the most common scenarios. That was the first time I experienced that tester "state of shock," where you thought you had tested it all but something obvious apparently slipped by you.

Based on the call stack, it didn’t take the developer long to figure out that one of the check-box labels was causing the error. That check box was the only one whose label contained a forward slash (/), or indeed any special character. Both the program manager and the developer looked at me and said, "Did you test this?" Let’s just say those are words no tester ever wants to hear. I took a deep breath and admitted the truth: I had assumed that because the first row of check boxes worked, the rest of them would work too. Consequently, I had no idea whether it was a last-minute regression or whether the bug had existed all along.

The program manager picked up the phone and called the server administrator, asking if he would do us a favor and restart the server. If I recall correctly, the developer had an idea for a quick fix that would require "only" a restart rather than a full redeployment. The server restarted, the site came back up, and the mainline scenario was still broken. The developer then thought of another idea, but the program manager said that he couldn’t ask for another favor.

The lesson I learned from story number one is to never assume that any basic functionality will just work. Story number two occurred during the latter half of my software testing career, while I was on the editor team. We were in the midst of a full test pass for the Visual Studio 2005 Beta 2 release, where every known test case must be run and every possible scenario must be tried in order to break the software. I recall some feature areas having so many test cases that it took three weeks to run them all, and most of them were automated!

A full test pass is usually a tester’s last chance to have bugs of medium and even low priority fixed before the release, but it is specifically designed to find the high-priority bugs first. Because many of these tests are automated, we had a lot of analysis to do in the lab, such as figuring out why a test case failed. Was it a product bug? Was it a test-case logic issue? Was it just a UI-timing issue, where the UI is slow to show a window but the automation framework attempts to run the command a little too soon, thus failing the test? While analyzing test-case failures for some of the primary scenarios, I couldn’t help thinking about how many test cases I still had to run and analyze. In other words, I wanted to analyze failures as fast as possible.
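The UI-timing failures described above are the classic flaky-test pattern: the automation fires a command before the window exists. One common mitigation, sketched here in Python (the `find_window` probe in the usage comment is hypothetical, not part of any real framework), is to poll with a timeout instead of assuming the UI is ready:

```python
import time

def wait_for(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds elapse. Returns the value, or None on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    return None

# Usage: instead of running the command immediately (and failing when
# the window is slow to appear), wait for it first.
#
# window = wait_for(lambda: find_window("Find and Replace"))
# if window is None:
#     raise AssertionError("window never appeared; likely a product bug")
```

A timeout like this also helps the analysis step: a test that fails only after a generous wait points toward a product bug rather than a timing fluke.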

I will never forget one particular lab run ("run" being the term we use for executing a specific set of test cases against a specific machine configuration or edition of Visual Studio). This run was testing the Visual Studio Standard edition, and all the Emacs and Brief emulation test cases, which verify alternate editing functionality and keyboard shortcuts, failed.

A quick investigation showed that the required files were not found on the computer the run was conducted on. I assumed that the missing files had something to do with the run itself and not with the actual product, basing that assumption on the fact that the Professional edition, among others, had passed. Also, it wasn’t uncommon for an issue to arise with a run configuration, although such issues usually receive a closer investigation than the one I gave.

So I resolved the failures as the result of a machine-configuration problem, that is, failures that happened only because the files were not present on that machine. And my analysis came back to haunt me.

Several weeks later, as we were about to ship the beta release, the lead developer for the editor discovered that the editor emulation feature was not available in the Standard edition of the Visual Studio 2005 Beta 2 release. Once again, I experienced that "tester shock," and it probably felt just as bad as it had in the previous story: "Wow, I can’t believe I let this one get by me!" I realized that the only words worse to a tester’s ears than "Did you test this?" are "Why did you assume this was not a bug?"

The lesson I learned from story number two is this: when in doubt, get a second opinion on whether something is a bug.

Here are a few other tips related to never assuming anything:

  • Never assume that someone else is covering a particular scenario. Test everything that comes to mind, and then test some more. A little overlap never hurts.

  • Never assume the people reading your bug report will "just get" the bug. Always be as explicit as possible in your steps for reproducing the issue, even when those steps are completely obvious to you. This holds true especially when you attach a picture to your bug report. A picture is worth a thousand words, so make sure the person reading your report gets the right thousand words.

  • Never assume that a simple scenario could not be broken. Always, always, always test, even if it is the most trivial example for the most trivial feature you’ve ever seen. Don’t take the chance of having to hear the words, "Did you test this?"
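The slash bug from story number one suggests a concrete way to practice "never assume": drive one test over every input, special characters included, instead of spot-checking the first row. Here is a minimal data-driven sketch in Python; the `render_checkbox` function and the label list are illustrative stand-ins, not the actual site’s code.

```python
import unittest

def render_checkbox(label):
    # Illustrative stand-in for the code under test:
    # formats a check-box control with its label.
    return '<input type="checkbox"> ' + label

class CheckboxLabelTests(unittest.TestCase):
    # Every label gets tested, not just the first row, including the
    # ones with special characters like the forward slash in the story.
    LABELS = ["Alpha", "Beta", "C/C++", "F#", "VB.NET", "A&B"]

    def test_every_label_renders(self):
        for label in self.LABELS:
            # subTest reports each failing label separately instead of
            # stopping at the first failure.
            with self.subTest(label=label):
                self.assertIn(label, render_checkbox(label))

if __name__ == "__main__":
    unittest.main()
```

Enumerating the inputs in data rather than in prose also documents exactly which cases were covered, which helps when someone later asks, "Did you test this?"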

Tip 2: Learn from the bugs you missed

As I became more confident in my software testing skills, I started actively monitoring the bugs that other people filed against my feature area. I became intrigued by the question "What else am I missing?" There were many days when I stared at my monitor just wondering what I hadn’t tried yet. Looking at these other bug reports helped me identify the holes in my software testing style, the strategies for breaking software I had yet to learn, the categories of bugs I hadn’t seen yet, and so forth. I believe a lot of software testing skill comes from pure experience.

Whenever a bug that someone else filed in your feature area is later fixed, ask yourself what you could learn from it. Are you missing other, similar tests? This process could be considered a root-cause analysis, but instead of approaching it from a developer’s point of view, asking why the bug was introduced, do it from the tester’s point of view, asking what you didn’t do to catch the bug. You may find it a rewarding exercise to try at least once.

Tip 3: Help your developer however possible

I joke a lot about my "You broke it, and I’m telling" attitude about software testing, but in all seriousness, I do not want my developer to feel this way at all. I think communication is the key to producing high-quality software. And I think the more you can do as a software tester to help out your developer, the better you’ll communicate and the happier your customers will be with your product.

First, establish trust with your developer. If you say you’re going to test something by a given time, get it done. And if the developer needs something, help him or her out as soon as possible. Also, actively seek feedback from your developer on which features lack testing, how you can do more testing, and so forth. As with Tip 2, you’ll be surprised how many new scenarios you’ll come up with just by asking your developer for ideas.

It won’t take long to see the benefits of going the distance to help your developer.

Tip 4: Leave appropriate comments when closing bugs

One year, six months, or even just three months (or, for me, three days) from now, you won’t remember how you verified that bug fix. If the original steps for reproducing the bug are no longer accurate, make sure you leave a comment explaining exactly what you did to verify the fix. Sometimes bugs morph into other bugs, so the outcome of a bug no longer matches its original description or even its title!

And don’t forget to include the build number when you verify the fix. This will help greatly to identify when regressions are introduced.

Tip 5: Don’t just get it in writing

E-mail messages aren’t enough. If you’re not going to cover a specific scenario, get it in writing in your test plan. Make sure any discussions about bugs, whether they are hallway conversations or conversations via e-mail, are also captured in the bug itself. Unlike e-mail messages that can be quickly discarded and deleted, bug histories tend to stick around much longer. This is especially important if you leave the project, as the person taking over may not know about these conversations or decisions.
