Continuous Integration

The continuous integration process starts when a developer checks in code or merges code from a feature branch into the main branch. This triggers a series of automated tests, most commonly unit tests, to validate the quality of the code being checked in.


Improving Quality Through Pull Request Validation

In distributed version control systems like Git, a request to have a code change accepted into the mainline branch is called a pull request (PR). Common PR submission errors or check-in rules can first be validated using an automated analysis tool, or “bot,” that is triggered during a check-in. One example is rultor (https://github.com/rultor), a bot that, once triggered, creates a Docker container, merges the pull request into the master branch, runs the predefined set of tests, and, after a successful run, closes the request. After these checks, rultor can also start a new Docker container and deploy the existing product to the production endpoint.

Another example is CLAbot (https://github.com/clabot), a bot that checks whether the author of a pull request submitted to a repository has already signed a Contributor License Agreement (CLA). CLAbot is useful enough that the Azure team uses it to automate the work of validating licensing for contributions to Azure GitHub repositories (https://github.com/azurecla).


Continuous integration defines a build workflow that includes all the steps required to build and run tests for your application. A typical build workflow could contain steps like the following:

Download the source code and restore any package dependencies using tools like Maven for Java, npm for Node.js, or NuGet for .NET applications.

Build the code using build tools like Ant, Gradle, or MSBuild.

Run a set of tasks using JavaScript task runners like Grunt or Gulp to optimize images, or bundle and minify JavaScript and CSS (a minimal sketch follows this list).

Run unit tests using tools like JUnit for Java, Mocha for Node.js, or xUnit for .NET applications.

Run static analysis tools like SonarQube to analyze your source code and code coverage reports, or run specialized tools like PageSpeed for web performance.

If the tests pass, build the Docker image and push it to your Docker registry.
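To make the JavaScript task-runner step more concrete, here is a minimal, hypothetical Gulp sketch that bundles and minifies JavaScript during a CI build. It assumes the gulp, gulp-concat, and gulp-uglify packages are installed as dev dependencies; the source glob and output path are placeholders.

```typescript
// gulpfile.ts — a hypothetical bundling/minification step for the CI build
import gulp from "gulp";
import concat from "gulp-concat";
import uglify from "gulp-uglify";

// Concatenate all application scripts into a single bundle and minify it.
gulp.task("scripts", () =>
  gulp
    .src("src/js/**/*.js")      // placeholder source glob
    .pipe(concat("bundle.js"))  // concatenate into one file
    .pipe(uglify())             // minify the bundle
    .pipe(gulp.dest("dist/js")) // placeholder output directory
);
```

The CI server would then simply invoke this task (for example, gulp scripts) as one step of the build workflow.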

Now that we’ve discussed what a CI workflow might look like, let’s discuss some of the testing and analysis tools mentioned previously.

Unit Testing

A unit test is designed to test code at a functional level. Take the example of a simple Add() method that takes two numbers and returns their sum. A unit test could run a number of checks to ensure that the method works as expected (1 + 1 = 2) and fails as expected (0 + “cat” throws an exception). One of the main premises of unit testing is code isolation: a function can be tested independently of any other moving parts. Unit tests also help with regression testing, ensuring that a code change doesn’t inadvertently break the expected behavior covered by an existing test.
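As an illustration (not taken from any sample project), a minimal Mocha test for a hypothetical add() function might look like the following sketch; it assumes Mocha and its type definitions are installed, and uses Node’s built-in assert module.

```typescript
// add.test.ts — a minimal, hypothetical Mocha unit test for an add() function
import { strict as assert } from "assert";

// The function under test; in a real project this would be imported from your module.
function add(a: number, b: number): number {
  if (typeof a !== "number" || typeof b !== "number") {
    throw new TypeError("add() only accepts numbers");
  }
  return a + b;
}

describe("add()", () => {
  it("returns the sum of two numbers", () => {
    assert.equal(add(1, 1), 2);
  });

  it("throws when given a non-numeric argument", () => {
    // Covers the 0 + "cat" case from the text.
    assert.throws(() => add(0, "cat" as unknown as number), TypeError);
  });
});
```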

Testing Service Dependencies with Consumer-Driven Contract Testing

One of the drawbacks of microservices is that the complexity of running multiple services, each with its own set of dependencies, makes the system difficult to test. This gets even more complicated as each service evolves at a different pace from the rest of the system, or, worse yet, depends on an older version of a component that might no longer be available or whose behavior has changed in a way that could introduce subtle bugs. Going further, a microservice can be both a consumer of, and a provider to, other services. This means you have to clearly articulate microservice dependencies.

One way to define dependencies across microservices is to create consumer-driven contracts. As we discussed previously, a good microservice architecture should account for the independent evolution of interconnected services. To ensure that this is the case, each service shares a contract rather than a bounded type. A consumer contract is a list of expectations of the service provider (the microservice) that must be fulfilled for the consumer to use it successfully. It’s important to note that these contracts are implementation-agnostic; they simply enforce the policy that, for the service to be consumable by its clients, the integration tests must pass. A sample provider contract for the Product Catalog service is shown in Figure 6.6, where both the FrontEnd and Recommendation services need product information from the Product Catalog service.


FIGURE 6.6: The provider contract includes two consumer contracts and the full Product Catalog

Ian Robinson outlined three core characteristics of provider contracts in his “Consumer-Driven Contracts” overview (http://martinfowler.com/articles/consumerDrivenContracts.html):

Closed and complete: A provider contract has to represent the complete set of functionality offered by the service.

Singular and authoritative: Provider contracts cover all system capabilities that can be exposed to the client, meaning the contract is the source of truth for what the provider service can do.

Bounded stability and immutability: A provider contract is stable and won’t change for a bounded period and/or locale.

In Figure 6.7, both consumer microservices connect to the same Product Catalog provider, but with slightly different expectations as to what data they will be handling. While the provider contract exposes the provider’s full schema, the FrontEnd and Recommendation consumers each have a different set of required data fields (expectations). The presence of additional fields in the full Product Catalog should not impact any of the consumers, as their only dependency should be on their required fields.


FIGURE 6.7: The FrontEnd and Recommendation consumers each have different data expectations from the Product Catalog provider
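To make the idea of per-consumer expectations concrete, the following sketch expresses each consumer contract as a TypeScript interface; the field names are hypothetical and not taken from the actual Product Catalog schema. Each consumer compiles only against its own required fields, so additional fields in the provider’s full schema affect neither consumer.

```typescript
// Hypothetical consumer contracts for the Product Catalog provider.
// Field names are illustrative; the real schema may differ.

// What the FrontEnd consumer requires to render a product page.
interface FrontEndProduct {
  id: string;
  name: string;
  price: number;
  imageUrl: string;
}

// What the Recommendation consumer requires to compute related products.
interface RecommendationProduct {
  id: string;
  category: string;
  tags: string[];
}

// The provider's full schema is a superset of both consumer contracts,
// so it can satisfy each consumer without either depending on the extra fields.
type ProductCatalogRecord = FrontEndProduct &
  RecommendationProduct & {
    description?: string; // an additional field ignored by both consumers
  };
```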

In terms of integration testing, client contracts should be explicitly expressed through a range of automated tests that ensure no breaking changes are inadvertently introduced into the system. To validate these contracts, a common and flexible approach is to rely on mock objects, which “mock” the behavior of a real object or resource, such as a microservice’s REST API. Teams can create mocks that simulate the expected behavior of a service to verify that consumers and providers continue to work as expected. Testing your service using mock objects for every check-in ensures that any breaking issues are caught early.

Pact is a consumer-driven contract-testing tool with mocking support, available for a number of programming languages including Java, .NET, JavaScript, Ruby, and Python (https://github.com/realestate-com-au/pact). Pact also includes Pact Broker, a repository for sharing consumer-driven contracts that can even be run as a Docker image (https://hub.docker.com/r/dius/pact_broker/).
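As a rough sketch of how such a test might look with the JavaScript Pact library (@pact-foundation/pact), the FrontEnd team could record its expectations against a Pact mock of the Product Catalog service. The port, interaction details, and product fields below are hypothetical, and the exact API may differ between Pact versions.

```typescript
// frontend-productcatalog.pact.test.ts — hypothetical Pact consumer test (Mocha style)
import { Pact } from "@pact-foundation/pact";
import { strict as assert } from "assert";
import * as path from "path";
import fetch from "node-fetch";

const provider = new Pact({
  consumer: "FrontEnd",
  provider: "ProductCatalog",
  port: 8989,                                // local mock service port (placeholder)
  dir: path.resolve(process.cwd(), "pacts"), // where the contract file is written
});

describe("FrontEnd -> Product Catalog contract", () => {
  before(() => provider.setup());     // start the Pact mock service
  afterEach(() => provider.verify()); // fail if expected interactions were not met
  after(() => provider.finalize());   // write the pact file for the provider to verify

  it("returns the fields the FrontEnd needs for product 42", async () => {
    await provider.addInteraction({
      state: "product 42 exists",
      uponReceiving: "a request for product 42",
      withRequest: { method: "GET", path: "/products/42" },
      willRespondWith: {
        status: 200,
        headers: { "Content-Type": "application/json" },
        body: { id: "42", name: "Widget", price: 9.99, imageUrl: "/img/42.png" },
      },
    });

    // In a real test this request would go through the FrontEnd's own client code.
    const response = await fetch("http://localhost:8989/products/42");
    const product = (await response.json()) as { name: string };
    assert.equal(product.name, "Widget");
  });
});
```

Running a test like this on every check-in regenerates the contract file, which the Product Catalog team can then verify against their implementation (for example, via a Pact Broker).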


Public and Third-Party Services

Consumer-driven contracts help ensure that known service dependencies across microservices are well defined and tested by both the provider and the consumer. Integration testing gets harder when you provide a public API or consume a public third-party API that you don’t control. In those scenarios, any change to the API can break dependent services.


Code Analysis with SonarQube

Because code for your service will be added by a number of developers, it is often necessary to ensure that developers follow certain style or quality standards established within your organization. SonarQube is an open-source platform designed to continuously monitor code quality, including code duplication, coding style, code coverage, documentation comment coverage, and more. SonarQube supports a number of programming languages, including Java, JavaScript, C#, PHP, Objective-C, and C++.

SonarQube integrates well with a variety of CI tools, such as Bamboo, Jenkins, and Visual Studio Team Services, exposing a web frontend where any engineer can quickly assess the status of the codebase.

Web Site Performance

Some microservices aren’t REST services per se, but rather web sites. For web code, performance is a key metric, and not only for search engine optimization: as an Aberdeen Group study shows, even a one-second delay in performance can cause a seven percent drop in conversions. Treat performance like a feature, and ensure that you are measuring against your required performance targets. To do this, you can use a number of prebuilt Grunt or Gulp tasks that automate the measurement of your web site’s performance. For example, you can use the phantomas Grunt task (http://bit.ly/grunt-phantomas), a configurable web performance metrics collector with more than 100 built-in performance metrics covering image size, caching, browser performance, and more. Performance results are then output to a shared directory, as either a comma-separated values (CSV) file or JSON. Another useful tool is the PageSpeed task, which uses Google’s PageSpeed Insights API to test your site’s performance for both mobile and desktop, as shown in Figure 6.8. Each performance recommendation includes a link with more information on how to fix the issue. By integrating speed measurement tools into your CI process, you’ll catch performance degradation bugs with every check-in.


FIGURE 6.8: A sample set of recommendations from Google’s PageSpeed Insights web site
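As a sketch of what automating such measurements might look like with the grunt-phantomas task, you could add a target like the following to your Gruntfile. The URL, output path, and option names shown here are assumptions based on the plugin’s typical configuration and may differ from the current version.

```typescript
// Gruntfile.js — hypothetical grunt-phantomas configuration (option names are assumptions)
module.exports = function (grunt) {
  grunt.initConfig({
    phantomas: {
      homepage: {
        options: {
          url: "http://localhost:8080/",   // placeholder: the page under test
          indexPath: "./build/phantomas/", // placeholder: where reports are written
          numberOfRuns: 3,                 // average metrics over several runs
        },
      },
    },
  });

  grunt.loadNpmTasks("grunt-phantomas");

  // Run as part of the CI build, for example with "grunt perf".
  grunt.registerTask("perf", ["phantomas"]);
};
```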
