CHAPTER 15

Specifications and Testing

Good User Stories, acceptance criteria, and functional tests are all critical to success in Agile software development. Agile teams need to understand the value of good acceptance criteria and of using BDD to drive design, and they need to invest in automated functional tests. Too often, teams waste time writing documents in multiple places and trying to keep them synchronized; these quickly become outdated and rarely reflect what the software is actually doing. I think a better approach is to have what Gojko Adzic calls “living documentation.”

Real-Life Stories

Story 1: Missing Good Acceptance Criteria

One of the most critical parts of a User Story is the acceptance criteria. They are how the developer, Product Owner, and QA ensure they are on the same page, and the place to think through all the scenarios for how the functionality will work. Only when all the acceptance criteria can be demonstrated to the Product Owner, and the Product Owner accepts the work, can the User Story be closed. So having people who can write good acceptance criteria is important. I was on one Scrum team that wrote acceptance criteria using the BDD given-when-then format. We would then take the acceptance criteria and use them to create JBehave story files. Over time we realized this was not a good practice, but I will talk about that in Story 2. Rather, the issue I saw on this team was that the SDET on the team was writing the acceptance criteria. This seemed odd to me from the start, and when I asked why it was being done this way, nobody seemed to have a good answer. The logic was that the SDET was doing the testing, so why not have him write the acceptance criteria as well? This did not work well on this team, or on other teams I have seen. I think the reason is that being a good SDET or QA engineer is not the same as knowing the business domain and being able to think through all the scenarios for a given User Story. The acceptance criteria were not very clear and things were missed, which hurt both the quality of the software and the team’s velocity.
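As a hypothetical illustration (the product, SKU, and wording here are invented, not taken from the project in this story), a given-when-then acceptance criterion written as a JBehave story scenario might look like this:

```gherkin
Scenario: Adding an in-stock item to an empty cart

Given an empty shopping cart
And the product SKU-1001 is in stock
When the shopper adds SKU-1001 to the cart
Then the cart contains 1 item
```

Each line of the scenario later binds to a step method in the test code, which is what makes criteria written this way directly executable.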

Thoughts

It is not that an SDET or automation engineer cannot write good acceptance criteria; rather, it is a different skill set. On teams where I’ve seen really thorough and well-written acceptance criteria, they were written by either a QA engineer or a BA. They have a much better understanding, at least at the time the acceptance criteria are written, of what the User Story is about. From my experience, they are better suited to think through all the scenarios and articulate them in the acceptance criteria. The other point is that writing acceptance criteria should be a collaborative effort, not just one person handing finished criteria off to a developer.

From my perspective, the order of testing scenarios goes something like this: acceptance criteria scenarios are first derived from business rules, and automated tests are then derived from the acceptance criteria scenarios. The acceptance criteria do not have to be in the given-when-then format. In fact, I am a fan of using an example table, which I think can more clearly illustrate what the expected behavior should be. There are many examples of this approach; I would recommend having a look at Specification by Example by Gojko Adzic (Manning Publications, 2011).
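To illustrate the example-table style, here is a hypothetical shipping-cost rule (the tiers and prices are invented) expressed as a single scenario with a table of examples rather than as several separate given-when-then scenarios:

```gherkin
Scenario: Shipping cost depends on the order subtotal

Given a cart with a subtotal of <subtotal>
When the shopper views shipping options
Then the standard shipping cost is <shipping>

Examples:
|subtotal|shipping|
|$10.00  |$4.99   |
|$35.00  |$2.99   |
|$75.00  |free    |
```

The table makes the boundary behavior easy to scan, and adding a new business case is just adding a row.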

Story 2: Great Collaboration

In contrast to the situation in Story 1, I was fortunate to be on an Agile project where the process for writing User Stories worked very well. On this particular project we had a process where the QA engineer would write the acceptance criteria for the User Stories. The QA engineer worked very closely with the Product Owner to get answers and to make sure they understood all the scenarios for the User Story.

When a developer was ready to start working on a User Story, he or she would meet with the QA engineer to discuss the story. They would review the typical “as a blank, I want to be able to blank, so that I can blank,” and then they would review all the acceptance criteria. There would be some back and forth about whether the scenarios were clear enough or whether some scenarios were not covered. The result of this meeting, which could be as short as ten minutes, was that the developer and the QA engineer were on the same page about the User Story. This does not mean things couldn’t change, but at least we had a shared understanding of the work before development started.

Once everyone was on the same page in terms of what it meant to be “done done” for the User Story, on this particular team the developers would take the acceptance criteria and convert them into automated functional tests using JBehave. When we gave the story to our QA engineer for testing, one of the things he would check is that the automated functional tests were written and matched what the acceptance criteria said. The QA engineer also ensured that the tests were passing in the CI pipeline.
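To sketch what that conversion looks like, here is a minimal, hypothetical steps class for an add-to-cart scenario. In real JBehave, the @Given/@When/@Then annotations bind each method to the matching line of the story file; they are shown as comments here so the sketch compiles on its own, and the in-memory cart stands in for the real application under test.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical JBehave-style steps class. The commented-out annotations are
// how the framework would map story lines to these methods.
public class CartSteps {
    private final List<String> cart = new ArrayList<>();

    // @Given("an empty shopping cart")
    public void givenAnEmptyCart() {
        cart.clear();
    }

    // @When("the shopper adds $sku to the cart")
    public void whenShopperAdds(String sku) {
        cart.add(sku);
    }

    // @Then("the cart contains $count item(s)")
    public void thenCartContains(int count) {
        if (cart.size() != count) {
            throw new AssertionError(
                "expected " + count + " items, found " + cart.size());
        }
    }

    public static void main(String[] args) {
        // Walk through the scenario from the story file, step by step.
        CartSteps steps = new CartSteps();
        steps.givenAnEmptyCart();
        steps.whenShopperAdds("SKU-1001");
        steps.thenCartContains(1);
        System.out.println("scenario passed");
    }
}
```

Because the step text and the method behavior live side by side, a reviewer (here, the QA engineer) can check line by line that the automated test matches the acceptance criteria.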

Thoughts

In Agile software development, collaboration is critical. I am also a fan of process, because it helps with consistency and with building a cadence. But to be clear, the team in Story 2 had no hard and fast rules about when the developer would meet with the QA engineer, and no formal meetings. Everyone on the team worked well together, and I could just walk over to the desk of one of the QA engineers and say, “Do you have a minute to discuss the User Story about X?” Of course, the QA engineer would not be completely surprised, because in the Daily Stand-up I would have mentioned that chances were good I would be starting on the next User Story in the backlog. Even if it was an unplanned meeting, it was never a big deal.

In cases where the scenarios were not complete, the developer would offer to help make the changes. This helped unblock the User Story so that development could begin. It was probably one of the best examples of teamwork I have ever seen on a project.

Story 3: Quality of Test Code

While I was a member of one Scrum team, I saw how managers who are not technical can make decisions that really hurt code quality. On this particular Agile project we were using JBehave, a BDD framework written in Java, to write our BDD tests. The manager of my team thought that any developer could write the automated functional tests using JBehave, so our web developers started writing them. That is not necessarily a bad thing, except that in this instance our web developers had no experience writing Java code. It was great that the web developers were getting a chance to learn Java. However, our team had said that we wanted to treat our automated functional tests as production-quality code, and in that sense I didn’t think it was a good idea to have people brand new to Java writing code that was supposed to be production quality. By the end of the project we had a significant amount of Java code that was not well written and had to be refactored.

Thoughts

In Story 3, the main issue was that the manager was so determined to prove his theory that all developers have the same abilities that he was willing to hurt the team’s velocity and put the quality of the software at risk. I even discussed this with the manager at one point, saying that not all developers have the same skill sets, and that expecting someone new to Java to produce the same quality of code as someone who has been writing Java for years was not reasonable.

The main point is that I think test code, both unit and functional, should be held to the same quality standard as production code. No less emphasis should be placed on it. Only when you have quality test code can you have confidence in your application, and only then can you really think about continuous delivery (CD). Without high confidence that you have not broken anything in the application, you have to rely on a lot of manual testing, and that is just not feasible for most applications.

Story 4: Too Many Sources of Truth

One thing that plagues many teams is how to document requirements and how to make sure the documentation reflects what the code actually does. I’ve seen this on a lot of Agile projects. On one project we had the requirements, the definitions of the functional tests, and the actual test code all in different places: the acceptance criteria on wiki pages, the functional test definitions in text files, and the code for the functional tests in source control. Not surprisingly, these quickly became out of sync. Our team could not trust the wiki pages or the text files. The only way to truly know how the application was working was to look at the actual code for the functional tests (in this case, Java).

By the end of this project we had hundreds of User Stories on the wiki that people could not trust, and they were essentially thrown away. So, instead of updating an existing User Story and its acceptance criteria whenever we added new functionality, we created brand-new User Stories and acceptance criteria. It was like starting from scratch for each new project. Then, at the end of the next project, we again could not trust the wiki pages, and they again became outdated. This was a terrible waste of time and effort, and in the end we had nothing, except the code for the functional tests, that actually told us what the application was doing.

Thoughts

A better approach, in my opinion, is to have “living documentation” that evolves over time and gives you the confidence that the application is behaving the way you think it should. So, how do we go about getting living documentation? What does it really mean? A good place to start is by looking at some of the articles by Gojko Adzic. There are other great articles out there as well.

Essentially, the goal is to have documentation that is a living, breathing thing that evolves over time. It should be something that can be executed every time code is checked in to source control. In other words, as part of CI you would run these tests and if they passed, you would know that the application was behaving as expected. There are several approaches to accomplish this goal. You can use things like Cucumber or JBehave. Basically, you are putting the acceptance criteria from the User Story into these frameworks and running them on every commit of code. When you add new features or the behavior of the application is changed, you update these files so that the new version is run when developers check in code. In this way, the documentation is truly “living” and changes over time.
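As one possible wiring (the workflow name, build tool, and build phase below are assumptions for illustration, not from any team in this chapter), a CI job that runs the executable specifications on every push might look like this in GitHub Actions:

```yaml
# Hypothetical workflow: run the living documentation (the JBehave or
# Cucumber suite) on every commit, so the specs are continuously verified
# against the code.
name: living-documentation
on: [push]
jobs:
  bdd-suite:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - name: Run acceptance tests
        run: mvn verify  # assumes the BDD suite is bound to the verify phase
```

If this job is green, the story files describe behavior the application actually has; if a story file and the code disagree, the build fails and someone has to reconcile them, which is exactly what keeps the documentation “living.”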

This living documentation becomes the “source of truth” and is the one place everyone can look to understand how the application works. For example, in an e-commerce application, I could look at the story file (if I were using something like JBehave) for adding items to a cart to understand how this works in the application. If this set of acceptance criteria (i.e., the story file in JBehave) is passing in CI, then I know the application is doing what I expect. Better yet, anyone can look at these acceptance criteria and can understand what the system is doing. In other words, the acceptance criteria are in the BDD language using the given-when-then structure. This means Product Owners, QA engineers, and developers are all looking in one place to understand how we expect the application to work. It means no more having multiple sources of truth (i.e., wiki pages, text files in source control, and maybe even some spreadsheets somewhere).

The framework you use, how you integrate this approach into your current CI pipeline, and how you fit this into your overall Agile development will obviously vary from team to team. But I think the benefits of using living documentation are worth at least trying to see if it will work for your team and organization. I think the time saved, reduction of redundant documentation, single source of truth, and increased confidence that what is coming out of your CI pipeline actually works the way you think it should are all reasons for considering using some sort of living documentation.

Story 5: One Size Does Not Fit All

When it comes to testing software, each company seems to do things a little differently. In some companies, a central group of people does manual testing. In others, QA engineers are part of each Scrum team. In still others, business users test the software. Regardless of whether a company is using Agile software development, I’ve seen that what works for one company does not necessarily work for another. I’ve seen this in several companies that try to adopt what the “cool kids” are doing. The best example occurred when a QA director introduced many concepts he had read about that were working for companies that are “software giants.”

This director introduced two changes: a centralized QA group and an SDET on each Scrum team. Prior to these changes, we had a QA engineer on each cross-functional Scrum team. The main problem was a lack of understanding of how the existing Scrum teams in this organization functioned. The changes were based entirely on what was working for other companies, but this particular company did not have the same culture, infrastructure, or way of working as those “software giants.” The changes were introduced without understanding how Agile was working in this company or the importance of having a cross-functional Scrum team.

When these changes were implemented, the embedded QA engineers were removed from the Scrum teams and an SDET was added to each team. But the SDET role in this company was very different from what it was at other companies. These changes led to a less knowledgeable central QA group, because the group was no longer involved in the development of the features. Testing became slower, many areas were not covered, and defects went undetected; in general, the quality of the application suffered. This was due in part to the lack of automated tests. It also created a classic “throw it over the wall” mentality: once teams finished their Sprints and collected their points, they were really not concerned about what testing would happen later.

Thoughts

I’m definitely an advocate of trying new things, as I’ve mentioned in other chapters in this book. But there are some critical components that need to be in place. For example, there needs to be a culture of openness so people can speak up when something is not working. Then, of course, these concerns need to be heard and acted on by people who have the power to change things.

Following are some of the things to look out for and how to fix things:

First, look at your culture. Just because some process works at some other company does not mean it will work at yours. Each company, and even organization, has its own culture. Will introducing something like centralized QA work in yours?

Second, don’t set things in stone. One of the issues with the situation in Story 5 is that it was approached as a mandate and it was absolute.

Third, listen to your teams. When everyone from managers to developers starts complaining that things have been made harder and that the software quality is impacted, those concerns should probably be heard.

Finally, let the numbers tell the story. If after a big change you see defect counts go up, testing take twice as long, or releases always running late, then you need to take a close look at whether these problems are a result of the change. If they are, then be open to rolling back the changes or adjusting the areas that are harming software development.

Story 6: Worth the Investment

In my opinion, automated functional testing is one of the best investments a team can make. It can help catch defects before they get to Production and give you confidence that the application is still working the way you expect. But you need to invest in the right tests and in environments in which to run them. I was on one team that was adopting Agile, and it made a good start by creating a few automated test scripts, run with Selenium Grid. We had a small test suite for this particular e-commerce web site, and for every release we would run the tests to make sure that nothing was broken. This was a great start, since previously we had had no automated tests. In this particular case, the test scripts were run against the web site once it was pushed to the Production environment (i.e., once it was live to anyone who hit the web site). Because they were running in Production, the test scripts did not go through the checkout process: that would have created real orders, which would have decremented inventory for real customers, and that would not have been good.

Thoughts

If you don’t currently have automated test scripts, then you are missing out on an opportunity to increase quality and reduce manual testing. So adding any tests, even if they point to a Production version of your web site, is better than nothing. But, as mentioned in Story 6, that should just be a starting point.

Ideally, you run the automated functional tests before you deploy the code to a Production environment. Perhaps this is not an option in your particular case, so running against Production might be something you just have to live with and you might be limited in what you can test in an automated fashion. But if you can create a test environment (usually a Stage or Preview environment) and run the tests against that environment, then that is preferred.
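One simple way to express that Stage-versus-Production constraint in code is to gate which test suites run in each environment. The sketch below is hypothetical (the suite names and environment labels are invented): the destructive checkout suite runs everywhere except live Production, where real orders would be created.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical selector that decides which automated suites are safe to
// run against a given environment.
public class SuiteSelector {
    public static List<String> suitesFor(String environment) {
        List<String> suites = new ArrayList<>();
        // Read-only suites are safe everywhere, including Production.
        suites.add("search");
        suites.add("product-detail");
        suites.add("add-to-cart");
        if (!"production".equals(environment)) {
            // Checkout creates orders and decrements inventory, so it runs
            // only where the data is not real (e.g., Stage or Preview).
            suites.add("checkout");
        }
        return suites;
    }

    public static void main(String[] args) {
        System.out.println("stage: " + suitesFor("stage"));
        System.out.println("production: " + suitesFor("production"));
    }
}
```

The point of making the rule explicit is that when a proper Stage environment finally exists, enabling full coverage is a one-line change rather than a rewrite of the test scripts.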

The other issue in Story 6, again because the team was limited to running against Production, was that the automated functional tests did not cover one of the most critical parts of the web site: checking out the cart. In this particular case I asked the QA engineer if he was testing that part of the web site manually, and he said no. I was a bit surprised; the reason he gave was that no one wanted to use a personal credit card. I recommended to our manager that we get a corporate card the QA engineer could use to test the checkout process every time we pushed a release to Production. Again, this is not ideal by any means, but if you can automate testing of 75% of your web site and then manually test the remaining areas (especially critical ones like checking out a cart), that is a great improvement over having no automated testing at all.

One technique I’ve used with good success is the concept of developing “User Journeys.” Basically, you create a set of personas and then walk through how that persona would interact with your web site. After you have these User Journeys, you can automate parts of them and build on that automation over time. Of course, the User Journeys themselves will evolve over time. So, for example, if you had an Online Travel site, one persona might be “Susan, a mother of three planning vacation for her family.” Then you develop a User Journey for this persona, like “Susan wants to explore hotels around the Orlando area” and “She wants to be able to sort by price” and “Once she finds a hotel she will add it to her cart and check out.” This allows you to understand, at a high level, how real users might interact with your web site. Once you have some User Journeys, you can automate the most critical parts, like adding items to a cart or checking out. There are many great articles on User Journeys, and I would recommend doing more research if you are interested in learning a different way to look at how to test your web site.
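Continuing the travel-site example, Susan’s journey could be captured as a short scenario that the team can automate piece by piece over time (the wording below is illustrative, not a real specification):

```gherkin
Narrative: Susan, a mother of three, is planning a family vacation

Scenario: Susan finds an Orlando hotel within her budget
Given Susan is exploring hotels in the Orlando area
When she sorts the results by price
And she adds the hotel she likes to her cart
Then she can check out and complete the booking
```

Starting from the journey, the team might first automate the sort-by-price step, then add-to-cart, and leave checkout for manual testing until a safe environment exists.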

The idea of automated test scripts is not specific to Agile by any means. However, I think when adopting Agile software development methodology, automated functional testing becomes even more important because of things like CD and BDD. In order to release software often, it is critical that you have a high level of testing, and typically this testing needs to be automated because manual testing simply takes too long and does not catch everything.
