Integration tests

Integration tests are used to test whether a group of components works together correctly. These tests are used for two purposes:

  • Increasing the test coverage for those parts of an application that are not covered by unit tests—for example, classes that interact with other systems
  • Addressing risks that unit tests cannot address because they only arise when classes interact

It can be hard to understand what integration risks are, since it might seem obvious that the whole will work as expected as soon as all of its parts do. To understand this risk better, imagine that two components working together are responsible for climate control. One measures the temperature in degrees Celsius, while the other acts on that temperature but expects its input in degrees Fahrenheit. It will quickly become clear that, while both components work as intended in isolation, exchanging numbers and acting on them, the combination will not produce the desired outcome.
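
To make this concrete, here is a minimal sketch of such a mismatch, written in the NUnit style used later in this section. The Thermometer and ClimateController classes and all of their members are hypothetical, invented only for this illustration:

using NUnit.Framework;

// Each component passes its own unit tests, but they disagree
// about the unit of measurement they exchange.
public class Thermometer
{
    // Reports the current temperature in degrees Celsius.
    public double ReadTemperature() => 21.0;
}

public class ClimateController
{
    // Turns the heating on below 65 degrees Fahrenheit.
    public bool ShouldHeat(double temperatureInFahrenheit) =>
        temperatureInFahrenheit < 65.0;
}

[TestFixture]
public class ClimateControlIntegrationTest
{
    [Test]
    public void HeatingStaysOffAtRoomTemperature()
    {
        var thermometer = new Thermometer();
        var controller = new ClimateController();

        // Fails: 21 degrees Celsius (roughly 70 degrees Fahrenheit) is
        // misinterpreted as 21 degrees Fahrenheit, turning the heating on.
        Assert.That(controller.ShouldHeat(thermometer.ReadTemperature()), Is.False);
    }
}

An integration test such as this one catches the defect that neither component's unit tests can see, because the defect exists only in the combination.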

Integration tests, especially those that interact with other systems, not only take longer to run than unit tests but often require more setup or configuration as well. This may even include secrets such as usernames, passwords, or certificates. To handle such configuration, a settings file can be created next to the tests, from which the settings are loaded before the tests are executed. Every developer can then create their own copy of that file and run the tests using their own configuration.

Continuing the example from the previous section, let's assume that the MessageSender class that implements the IMessageSender interface needs a connection string to do its work. A test class for MessageSender might then look as follows:

using NUnit.Framework;

[TestFixture]
public class MessageSenderTest
{
    private MessageSender _messageSender;

    [SetUp]
    public void SetUp()
    {
        // Read the connection string from the active .runsettings file
        // and use it to construct the class under test.
        var connectionString = TestContext.Parameters["MessageSenderConnectionString"];
        _messageSender = new MessageSender(connectionString);
    }
}
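
With this in place, test methods on the class can exercise the configured instance against the real dependency. As a sketch, assuming a hypothetical SendMessage method on MessageSender (the real class may expose a different API), a test method added to the class above might look as follows:

[Test]
public void SendMessage_WithValidMessage_DoesNotThrow()
{
    // Uses the instance configured in SetUp; SendMessage is an
    // assumed method name for the purpose of this example.
    Assert.That(() => _messageSender.SendMessage("test-message"), Throws.Nothing);
}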

The connectionString needed for constructing the MessageSender class is retrieved from the Parameters object on TestContext. This is the NUnit approach for making settings from a .runsettings file available; the exact approach varies per test framework. An example .runsettings file would look as follows:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <TestRunParameters>
    <Parameter name="MessageSenderConnectionString" value="secret-value" />
  </TestRunParameters>
</RunSettings>

Moving the settings out to a separate file ensures that secrets are not checked into source control, provided the file itself is excluded from it. In the Executing tests in a pipeline section, you will learn how to build a .runsettings file for running tests in a pipeline.
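
When running tests locally, the settings file can be passed to the test runner explicitly. For example, with the .NET CLI, where the file name integration.runsettings is just an example:

dotnet test --settings integration.runsettings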

If possible, integration tests should also be part of the continuous integration build. However, there is a risk that they will make the continuous integration build too slow. To counter this, one of the following solutions can be implemented:

  • Integration tests are executed in a separate build that is triggered in parallel to the continuous integration build (see the pipeline sketch after this list). This way, the duration of the continuous integration build stays low, while the integration tests are still executed continuously and developers get fast feedback on their work.
  • Integration tests are executed later in the pipeline, closer to the release of the software—for example, before or after the deployment to a test environment.
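
As a sketch of the first option, a separate Azure DevOps pipeline dedicated to the integration tests could look roughly as follows; the project path and .runsettings file name are assumptions made for this example:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
# Runs only the integration test project, using the settings file
# described earlier to supply the connection string.
- script: dotnet test tests/MessageSender.IntegrationTests --settings integration.runsettings
  displayName: Run integration tests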

The downside of the first approach is that the integration tests will no longer act as a quality gate before code is merged to the master branch. They will, of course, continue working as a quality-reporting mechanism. This means that, while defects might still be merged, they will be detected and reported by the separate build.

The second approach does not have this risk, since executing the tests is still part of the pipeline from source control to production. However, if not every build enters at least part of the release pipeline, the execution of the tests might be deferred to a later moment in time. This means that defects might become visible later, extending the time between introducing and fixing an issue.

In either approach, failing integration tests will no longer block the merging of changes, so another way must be found to ensure that developers take responsibility for fixing the defects that caused the tests to fail.

These trade-offs become even more evident with system tests, which often take so long that it is not possible to make them part of the continuous integration build.
