Tests

Let’s focus now on the JUnit 5 tests of this application. We implement three types of tests: unit, integration, and end-to-end. As introduced before, for the unit tests we use Mockito to exercise the SUT in isolation. We decide to unit test the two major components of our application (CatService and CookiesService) using Java classes containing different JUnit 5 tests.

Consider the first test class (called RateCatsTest). As can be seen in the code, in this class we define the class CatService as the SUT (using the annotation @InjectMocks) and the class CatRepository (which is used by CatService through dependency injection) as the DOC (using the annotation @Mock). The first test of this class (testCorrectRangeOfStars) is an example of a parameterized JUnit 5 test. The objective of this test is to assess the rating method inside CatService (method rateCat). To select the test data (input) for this test, we follow a black-box strategy, and therefore we use the information of the requirements definition. Concretely, FR3 states the range of stars to be used in the rating mechanism for cats. Following a boundary analysis approach, we select the edges of the input range, that is, 0.5 and 5. The second test (testIncorrectRangeOfStars) also exercises the same method (rateCat), but this time it evaluates the SUT response when out-of-range inputs exercise the SUT (negative test scenario). Then, two more tests are implemented in this class, this time aimed at assessing FR4 (that is, also using comments to rate cats). Notice that we use the JUnit 5 @Tag annotation to link each test with its corresponding requirement:

package io.github.bonigarcia.test.unit;

import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.text.IsEmptyString.isEmptyString;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.when;

import java.util.Optional;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import io.github.bonigarcia.Cat;
import io.github.bonigarcia.CatException;
import io.github.bonigarcia.CatRepository;
import io.github.bonigarcia.CatService;
import io.github.bonigarcia.mockito.MockitoExtension;

@ExtendWith(MockitoExtension.class)
@DisplayName("Unit tests (black-box): rating cats")
@Tag("unit")
class RateCatsTest {

    @InjectMocks
    CatService catService;

    @Mock
    CatRepository catRepository;

    // Test data
    Cat dummy = new Cat("dummy", "dummy.png");
    int stars = 5;
    String comment = "foo";

    @ParameterizedTest(name = "Rating cat with {0} stars")
    @ValueSource(doubles = { 0.5, 5 })
    @DisplayName("Correct range of stars test")
    @Tag("functional-requirement-3")
    void testCorrectRangeOfStars(double stars) {
        when(catRepository.save(dummy)).thenReturn(dummy);
        Cat dummyCat = catService.rateCat(stars, dummy);
        assertThat(dummyCat.getAverageRate(), equalTo(stars));
    }

    @ParameterizedTest(name = "Rating cat with {0} stars")
    @ValueSource(ints = { 0, 6 })
    @DisplayName("Incorrect range of stars test")
    @Tag("functional-requirement-3")
    void testIncorrectRangeOfStars(int stars) {
        assertThrows(CatException.class, () -> {
            catService.rateCat(stars, dummy);
        });
    }

    @Test
    @DisplayName("Rating cats with a comment")
    @Tag("functional-requirement-4")
    void testRatingWithComments() {
        when(catRepository.findById(any(Long.class)))
                .thenReturn(Optional.of(dummy));
        Cat dummyCat = catService.rateCat(stars, comment, 0);
        assertThat(catService.getOpinions(dummyCat).iterator().next()
                .getComment(), equalTo(comment));
    }

    @Test
    @DisplayName("Rating cats with empty comment")
    @Tag("functional-requirement-4")
    void testRatingWithEmptyComments() {
        when(catRepository.findById(any(Long.class)))
                .thenReturn(Optional.of(dummy));
        Cat dummyCat = catService.rateCat(stars, dummy);
        assertThat(catService.getOpinions(dummyCat).iterator().next()
                .getComment(), isEmptyString());
    }

}
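Thanks to the @Tag annotations, subsets of the suite can be executed selectively. As a sketch (assuming the project is built with Maven and the Surefire plugin version 2.22.0 or later, which maps JUnit 5 tags to the groups property), running only the unit tests could be configured as follows:

```xml
<!-- Illustrative pom.xml fragment (an assumption, not the project's
     actual build file): execute only tests annotated @Tag("unit") -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.0</version>
    <configuration>
        <groups>unit</groups>
    </configuration>
</plugin>
```

The same mechanism works for the requirement tags, for example groups set to functional-requirement-3 would run only the tests tracing FR3.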

The next unit test evaluates the cookies service (FR5). To that aim, the following test class uses CookiesService as the SUT, and this time we mock the standard Java object that manipulates the HTTP cookies, that is, javax.servlet.http.HttpServletResponse. Inspecting the source code of this test class, we can see that the first test method (called testUpdateCookies) exercises the service method updateCookies, verifying whether or not the format of the cookies is as expected. The next two tests (testCheckCatInCookies and testCheckCatInEmptyCookies) evaluate the method isCatInCookies of the service using a positive strategy (that is, the input cat corresponds with the format of the cookie) and a negative one (the opposite case). Finally, the last two tests (testUpdateOpinionsWithCookies and testUpdateOpinionsWithEmptyCookies) exercise the method updateOpinionsWithCookiesValue of the SUT following the same approach, that is, checking the response of the SUT using a valid cookie and an empty one. All these tests have been implemented following a white-box strategy, since their test data and logic rely completely on the specific internal logic of the SUT (in this case, how the cookies are formatted and managed).

Strictly speaking, this test does not follow a pure white-box approach, in the sense that its objective is not to exercise all the possible paths within the SUT. It can be seen as white-box in the sense that it has been designed directly from the implementation rather than from the requirements.

package io.github.bonigarcia.test.unit;

import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.not;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.collection.IsEmptyCollection.empty;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.doNothing;
import java.util.List;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import io.github.bonigarcia.Cat;
import io.github.bonigarcia.CookiesService;
import io.github.bonigarcia.Opinion;
import io.github.bonigarcia.mockito.MockitoExtension;

@ExtendWith(MockitoExtension.class)
@DisplayName("Unit tests (white-box): handling cookies")
@Tag("unit")
@Tag("functional-requirement-5")
class CookiesTest {

    @InjectMocks
    CookiesService cookiesService;

    @Mock
    HttpServletResponse response;

    // Test data
    Cat dummy = new Cat("dummy", "dummy.png");
    String dummyCookie = "0#0.0#_";

    @Test
    @DisplayName("Update cookies test")
    void testUpdateCookies() {
        doNothing().when(response).addCookie(any(Cookie.class));
        String cookies = cookiesService.updateCookies("", 0L, 0D, "",
                response);
        assertThat(cookies,
                containsString(CookiesService.VALUE_SEPARATOR));
        assertThat(cookies,
                containsString(CookiesService.CAT_SEPARATOR));
    }

    @Test
    @DisplayName("Check cat in cookies")
    void testCheckCatInCookies() {
        boolean catInCookies = cookiesService.isCatInCookies(dummy,
                dummyCookie);
        assertThat(catInCookies, equalTo(true));
    }

    @Test
    @DisplayName("Check cat in empty cookies")
    void testCheckCatInEmptyCookies() {
        boolean catInCookies = cookiesService.isCatInCookies(dummy, "");
        assertThat(catInCookies, equalTo(false));
    }

    @Test
    @DisplayName("Update opinions with cookies")
    void testUpdateOpinionsWithCookies() {
        List<Opinion> opinions = cookiesService
                .updateOpinionsWithCookiesValue(dummy, dummyCookie);
        assertThat(opinions, not(empty()));
    }

    @Test
    @DisplayName("Update opinions with empty cookies")
    void testUpdateOpinionsWithEmptyCookies() {
        List<Opinion> opinions = cookiesService
                .updateOpinionsWithCookiesValue(dummy, "");
        assertThat(opinions, empty());
    }

}
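To illustrate the cookie format these white-box tests depend on, the following is a small hypothetical parser. It is only a sketch inferred from the test data (dummyCookie = "0#0.0#_") and the constant names referenced in CookiesTest: we assume "#" as the value separator and "_" as the cat separator; the real CookiesService may differ.

```java
// Hypothetical sketch of the cookie format suggested by the test data
// "0#0.0#_": each cat entry is <id>#<average>#<comment> and entries are
// terminated by "_". The separators are assumptions, not the real
// CookiesService implementation.
public class CookieFormat {

    static final String VALUE_SEPARATOR = "#";
    static final String CAT_SEPARATOR = "_";

    // Returns true if a cat with the given id appears in the cookie value
    public static boolean isCatInCookie(long catId, String cookieValue) {
        for (String entry : cookieValue.split(CAT_SEPARATOR)) {
            if (entry.isEmpty()) {
                continue;
            }
            String[] fields = entry.split(VALUE_SEPARATOR);
            if (fields.length > 0
                    && fields[0].equals(String.valueOf(catId))) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isCatInCookie(0L, "0#0.0#_")); // true
        System.out.println(isCatInCookie(0L, ""));        // false
    }
}
```

This also makes clear why the tests are white-box: asserting on CookiesService.VALUE_SEPARATOR and CAT_SEPARATOR only makes sense with knowledge of this internal encoding.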

Let’s move on to the next type of tests: integration. For this type of test, we use the in-container test capabilities provided by Spring. Concretely, we use the Spring test object MockMvc to evaluate the HTTP responses of our application from the client side. In each test, different requests are exercised, verifying whether the responses (status code and content type) are as expected:

package io.github.bonigarcia.test.integration;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import org.springframework.test.web.servlet.MockMvc;

@ExtendWith(SpringExtension.class)
@SpringBootTest
@AutoConfigureMockMvc
@DisplayName("Integration tests: HTTP responses")
@Tag("integration")
@Tag("functional-requirement-1")
@Tag("functional-requirement-2")
class WebContextTest {

    @Autowired
    MockMvc mockMvc;

    @Test
    @DisplayName("Check home page (GET /)")
    void testHomePage() throws Exception {
        mockMvc.perform(get("/")).andExpect(status().isOk())
                .andExpect(content().contentType("text/html;charset=UTF-8"));
    }

    @Test
    @DisplayName("Check rate cat (POST /)")
    void testRatePage() throws Exception {
        mockMvc.perform(post("/").param("catId", "1").param("stars", "1")
                .param("comment", "")).andExpect(status().isOk())
                .andExpect(content().contentType("text/html;charset=UTF-8"));
    }

    @Test
    @DisplayName("Check rate cat (POST /) of a non-existing cat")
    void testRatePageCatNotAvailable() throws Exception {
        mockMvc.perform(post("/").param("catId", "0").param("stars", "1")
                .param("comment", "")).andExpect(status().isOk())
                .andExpect(content().contentType("text/html;charset=UTF-8"));
    }

    @Test
    @DisplayName("Check rate cat (POST /) with bad parameters")
    void testRatePageNoParameters() throws Exception {
        mockMvc.perform(post("/")).andExpect(status().isBadRequest());
    }

}

Finally, we also implement several end-to-end tests using Selenium WebDriver. Inspecting the implementation, we can see that this test class uses two JUnit 5 extensions at the same time: SpringExtension (to start/stop the Spring context within the JUnit 5 test lifecycle) and SeleniumExtension (to inject WebDriver objects aimed at controlling web browsers in the test methods). In particular, we use three different browsers:

  • PhantomJS (a headless browser), to assess whether the list of cats is properly rendered in the web GUI (FR1).
  • Chrome, to rate cats through the application GUI (FR2).
  • Firefox, to rate cats through the GUI, but getting an error as a result (FR2).

package io.github.bonigarcia.test.e2e;

import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.openqa.selenium.support.ui.ExpectedConditions.elementToBeClickable;
import static org.springframework.boot.test.context.SpringBootTest.WebEnvironment.RANDOM_PORT;

import java.util.List;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.web.server.LocalServerPort;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import io.github.bonigarcia.SeleniumExtension;

@ExtendWith({ SpringExtension.class, SeleniumExtension.class })
@SpringBootTest(webEnvironment = RANDOM_PORT)
@DisplayName("E2E tests: user interface")
@Tag("e2e")
public class UserInterfaceTest {

    @LocalServerPort
    int serverPort;

    @Test
    @DisplayName("List cats in the GUI")
    @Tag("functional-requirement-1")
    public void testListCats(PhantomJSDriver driver) {
        driver.get("http://localhost:" + serverPort);
        List<WebElement> catLinks = driver
                .findElements(By.className("lightbox"));
        assertThat(catLinks.size(), equalTo(9));
    }

    @Test
    @DisplayName("Rate a cat using the GUI")
    @Tag("functional-requirement-2")
    public void testRateCat(ChromeDriver driver) {
        driver.get("http://localhost:" + serverPort);
        driver.findElement(By.id("Baby")).click();
        String fourStarsSelector = "#form1 span:nth-child(4)";
        new WebDriverWait(driver, 10).until(
                elementToBeClickable(By.cssSelector(fourStarsSelector)));
        driver.findElement(By.cssSelector(fourStarsSelector)).click();
        driver.findElement(By.xpath("//*[@id='comment']"))
                .sendKeys("Very nice cat");
        driver.findElement(By.cssSelector("#form1 > button")).click();
        WebElement successDiv = driver
                .findElement(By.cssSelector("#success > div"));
        assertThat(successDiv.getText(),
                containsString("Your vote for Baby"));
    }

    @Test
    @DisplayName("Rate a cat using the GUI with error")
    @Tag("functional-requirement-2")
    public void testRateCatWithError(FirefoxDriver driver) {
        driver.get("http://localhost:" + serverPort);
        driver.findElement(By.id("Baby")).click();
        String sendButtonSelector = "#form1 > button";
        new WebDriverWait(driver, 10).until(
                elementToBeClickable(By.cssSelector(sendButtonSelector)));
        driver.findElement(By.cssSelector(sendButtonSelector)).click();
        WebElement errorDiv = driver
                .findElement(By.cssSelector("#error > div"));
        assertThat(errorDiv.getText(), containsString(
                "You need to select some stars for rating each cat"));
    }

}

To make the traceability of the test executions easier, in all the implemented tests we have selected meaningful test names using @DisplayName. In addition, for parameterized tests, we use the name element of @ParameterizedTest to refine the name of each execution of the test, depending on the test input. The following screenshot shows the execution of the test suite in Eclipse 4.7 (Oxygen):

Execution of the test suite for the application Rate my cat! in Eclipse 4.7

As introduced before, we use Travis CI as the build server to execute our tests during the development process. In the Travis CI configuration (file .travis.yml), we set up two additional tools to enhance the development and test process of our application. On the one hand, Codecov provides a comprehensive test coverage report. On the other hand, SonarCloud provides a complete static analysis. Both tools are triggered by Travis CI as part of the continuous integration build process. As a result, we can evaluate both the test coverage and the internal code quality of our application (such as code smells, duplicated blocks, or technical debt) along with our development process.
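The shape of such a configuration can be sketched as follows. This is an illustrative fragment, not the actual .travis.yml of the repository: the JDK version and build commands are assumptions, while the Codecov bash uploader and the sonar:sonar Maven goal are the standard ways to trigger the two services from Travis CI.

```yaml
# Illustrative .travis.yml sketch (assumed, not the project's real file)
language: java
jdk:
  - oraclejdk8
script:
  - mvn test
after_success:
  # Upload the coverage report to Codecov
  - bash <(curl -s https://codecov.io/bash)
  # Trigger the SonarCloud static analysis
  - mvn sonar:sonar
```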

The following picture shows a screenshot of the online report provided by Codecov (the report provided by SonarCloud was presented in the previous section of this chapter):

Codecov report for the application Rate my cat!

Last but not least, we use several badges in the README of our GitHub repository. Concretely, we add badges for Travis CI (status of the last build process), SonarCloud (status of the last analysis), and Codecov (percentage of the last code coverage analysis):

GitHub badges for the application Rate my cat!