Lesson 6: Testing in AngularJS

In this Lesson, we will cover the following recipes:

  • Configuring and running your test environment in Yeoman and Grunt
  • Understanding Protractor
  • Incorporating E2E tests and Protractor in Grunt
  • Writing basic unit tests
  • Writing basic E2E tests
  • Setting up a simple mock backend server
  • Writing DAMP tests
  • Using the Page Object test pattern

Introduction

Since its inception, AngularJS has always been a framework built with maximum testability in mind. Developers are often averse to devoting substantial time towards creating a test suite for their application, yet we all know only too well how wrong things can go when untested or partially tested code is shipped to production.

One could fill an entire book with the various tools and methodologies available for testing AngularJS applications, but a pragmatic developer likely desires a solution that is uncomplicated and gets out of the way of the application's development. This Lesson will focus on the most commonly used components and practices that are at the core of the majority of test suites, as well as the best practices that yield the most useful and maintainable tests.

Furthermore, preferred testing utilities have evolved substantially over the AngularJS releases spanning the past year. This Lesson will only cover the most up-to-date strategies used for AngularJS testing.

Note

The AngularJS testing ecosystem is incredibly dynamic in nature. It would be futile to attempt to describe the exact methods by which you can set up an entire testing infrastructure as their components and relationships constantly evolve, and will certainly differ as the core team continues to churn out new releases. Instead, this Lesson will describe the supporting test software setup from a high level and the test syntax at the code level of detail. I will add errata and updates to this Lesson at https://gist.github.com/msfrisbie/b0c6eceb11adfbcbf482.

Configuring and running your test environment in Yeoman and Grunt

The Yeoman project is an extremely popular scaffolding tool that allows the quick startup and growth of an AngularJS codebase. Bundled in it is Grunt, which is the JavaScript task runner that you will use in order to automate your application's environment, including running and managing your test utilities. Yeoman will provide much of your project structure for you out of the box, including but not limited to the npm and Bower dependencies and also the Gruntfile, which is the file used for the definition of the Grunt automation.

How to do it…

There is some disagreement over the taxonomy of test types, but with AngularJS, the tests fall into two categories: unit tests and end-to-end tests. Unit tests are black-box-style tests in which a piece of the application is isolated, has its external components mocked out, is fed controlled input, and has its functionality/output verified. End-to-end tests verify application-level behavior by simulating a user interacting with components of the application; they create an actual browser instance that loads and executes your application code, and make sure that the components operate properly together.

Using the right tools for the job

AngularJS unit tests run on the Karma test runner. Karma has long been the gold standard for AngularJS tests, and it integrates well with Yeoman and Grunt for automatic test file generation and test running. Much of the setup for Karma unit testing is already done for you with Yeoman.

Formerly, AngularJS provided a tool called the Angular Scenario Runner to run end-to-end tests. This is no longer the case; a modern test suite will now utilize Protractor, which is a new end-to-end testing framework built specifically for AngularJS. Protractor currently does not come configured by default when bootstrapping AngularJS project files, so a manual integration of it into your Gruntfile will be necessary.

Conveniently, both Karma unit tests and Protractor end-to-end tests utilize the Jasmine test syntax.

Both Karma and Protractor will require *.conf.js files, which will act as the test suite directors when invoked by Grunt. Protractor installation requires manual work, which is provided in detail in the Incorporating E2E tests and Protractor in Grunt recipe.
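A Yeoman-generated Karma configuration file resembles the following sketch; the filenames, paths, and option list in your project will differ:

```javascript
// karma.conf.js -- a minimal sketch; the file Yeoman generates
// will list more options and your actual file paths
module.exports = function(config) {
  config.set({
    // test syntax shared with Protractor
    frameworks: ['jasmine'],

    // application sources and unit test files (paths are assumptions)
    files: [
      'bower_components/angular/angular.js',
      'bower_components/angular-mocks/angular-mocks.js',
      'app/scripts/**/*.js',
      'test/spec/**/*.js'
    ],

    // run the suite in a headless browser
    browsers: ['PhantomJS'],

    // exit after a single run when invoked from `grunt test`
    singleRun: true
  });
};
```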

How it works…

Once the testing is set up, running and evaluating your test suite is simple. Karma and Protractor will run separately, one after the other (depending on which comes first in the grunt test task). Each of them will spawn some form of browser in which they will perform the tests. Karma will generally utilize PhantomJS to run the unit tests in a headless browser, and Protractor will utilize Selenium WebDriver to spawn an actual browser instance (or instances, depending on how it is configured) and run the end-to-end tests on your actual application that is running in the browser, which you will be able to see happening if it is running on your local environment.

Note

Downloading the example code

The code files for all the four parts of the course are available at https://github.com/shinypoojary09/AngularJS_Course.git.

There's more…

After running the test suite, the console output of Grunt will inform you of any test failures and other metadata about the test run. The output of a successfully run test suite, both unit tests and end-to-end tests with no errors, will include something similar to the following:

Running "karma:unit" (karma) task
INFO [karma]: Karma v0.12.23 server started at http://localhost:8080/
INFO [launcher]: Starting browser PhantomJS
INFO [PhantomJS 1.9.7 (Mac OS X)]: Connected on socket sYgu4c8ZxNFs73zBe_xq with id 75044421
PhantomJS 1.9.7 (Mac OS X): Executed 3 of 3 SUCCESS (0.017 secs / 0.015 secs)

Running "protractor:run" (protractor) task
Starting selenium standalone server...
Selenium standalone server started at http://192.168.1.120:59539/wd/hub
.....

Finished in 7.965 seconds
5 tests, 19 assertions, 0 failures

Shutting down selenium standalone server.

Done, without errors.
Total 19.3s

Error messages in AngularJS are always getting better, and the AngularJS team is actively working to make failures easier to diagnose by providing detailed error messages and better stack traces. When a test fails, the string identifiers that Jasmine allows you to provide while writing the tests will quickly allow the developer who is running the tests to identify the problem. This is shown in the following error output:

Running "karma:unit" (karma) task
INFO [karma]: Karma v0.12.23 server started at http://localhost:8080/
INFO [launcher]: Starting browser PhantomJS
INFO [PhantomJS 1.9.7 (Mac OS X)]: Connected on socket HVy4JBfIMACzUGR8gPFY with id 29687037
PhantomJS 1.9.7 (Mac OS X) Controller: HandleCtrl Should mark handles which are too short as invalid FAILED
    Expected false to be true.
PhantomJS 1.9.7 (Mac OS X): Executed 3 of 3 (1 FAILED) (0.018 secs / 0.014 secs)
Warning: Task "karma:unit" failed. Use --force to continue.

Aborted due to warnings.

See also

  • The Understanding Protractor recipe provides greater insight into what the Protractor test runner really is
  • The Incorporating E2E tests and Protractor in Grunt recipe gives a thorough explanation of how to set up your test suite in order to use Protractor as its end-to-end test runner

Understanding Protractor

Protractor is new to the scene in AngularJS and is intended to fully supplant the now deprecated Angular Scenario Runner.

How it works…

Selenium WebDriver (also referred to as just "WebDriver") is a browser automation tool that provides faculties to script the control of web browsers and the applications that run within them. For the purposes of end-to-end testing, the test runner manifests as three interacting components, as follows:

  • The formal Selenium WebDriver process, which takes the form of a standalone server with the ability to spawn a browser instance and pipe native events into the page
  • The test process, which is a Node.js script that runs and checks all the test files
  • The actual browser instance, which runs the application

Protractor is built on top of WebDriver. It acts as both an extension of WebDriver and also provides supporting software utilities to make end-to-end testing easier. Protractor includes the webdriver-manager binary, which exists to make the management of WebDriver easier.
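The webdriver-manager binary is invoked from the command line; the following invocations are typical (the binary's path inside node_modules may differ in your environment):

```shell
# download or update the Selenium standalone server and browser drivers
node_modules/protractor/bin/webdriver-manager update

# start a standalone Selenium server (port 4444 by default)
node_modules/protractor/bin/webdriver-manager start
```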

There's more…

Within the tests themselves, Protractor exports a couple of global variables for you to use, which are as follows:

  • browser: This exists to enable you to interact with the URL of the page and the page source. It acts as a WebDriver wrapper, so anything that WebDriver does, Protractor can do too.
  • element: This enables you to interact with specific elements in the DOM using selectors. Besides standard CSS selectors, this also allows you to select the elements with a specific ng-model directive or binding.
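As a sketch of how these globals appear inside a Protractor spec (the selectors and URL here are hypothetical):

```javascript
// navigate relative to the configured baseUrl
browser.get('/#/signup');

// select by CSS, by ng-model, and by binding
var heading = element(by.css('h2'));
var handleInput = element(by.model('handle'));
var handleBinding = element(by.binding('handle'));

// WebDriver-style interactions pass through the wrapper
handleInput.sendKeys('jakehsu');
expect(browser.getCurrentUrl()).toMatch('/signup');
```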

See also

  • The Incorporating E2E tests and Protractor in Grunt recipe gives a thorough explanation of how to set up your test suite in order to use Protractor as its end-to-end test runner
  • The Writing basic E2E tests recipe demonstrates how to build an end-to-end test foundation for a simple application

Incorporating E2E tests and Protractor in Grunt

Out of the box, Yeoman does not integrate Protractor into its test suite; doing so requires manual work. The Grunt Protractor setup is extremely similar to that of Karma, as they both use the Jasmine syntax and *.conf.js files.

Note

This recipe demonstrates the process of installing and configuring Protractor, but much of this can be generalized to incorporate any new package into Grunt.

Getting ready

The following is a checklist of things to do in order to ensure that your test suite will run correctly:

  • Ensure that the grunt-karma extension is installed using the npm install grunt-karma --save-dev command
  • Save yourself the trouble of having to list out all the needed Grunt tasks in your Gruntfile by automatically loading them, as follows:
    • Install the load-grunt-tasks module using the npm install load-grunt-tasks --save-dev command
    • Add require('load-grunt-tasks')(grunt); inside the module.exports function in your Gruntfile
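With load-grunt-tasks in place, the top of your Gruntfile might look as follows (a sketch; your existing Gruntfile will contain much more):

```javascript
// Gruntfile.js
module.exports = function (grunt) {
  // automatically load every grunt-* task listed in package.json,
  // replacing individual grunt.loadNpmTasks(...) calls
  require('load-grunt-tasks')(grunt);

  grunt.initConfig({
    // task configuration goes here
  });
};
```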

How to do it…

Adding Protractor to your application's test configuration requires you to follow a number of steps in order to get it installed, configured, and automated.

Installation

Incorporating Protractor into Grunt requires the following two npm packages to be installed:

  • protractor
  • grunt-protractor-runner

They can be installed by being added to the package.json file and by running npm install. Alternately, they can be installed from the command line as follows:

npm install protractor grunt-protractor-runner --save-dev

The --save-dev flag will automatically add the packages to the devDependencies object in package.json if it is present.

Selenium's WebDriver manager

Protractor requires Selenium, a web browser automation tool, to operate. The previous commands will have already incorporated the needed dependencies into your package.json file. As a convenience, you should bind the Selenium WebDriver update command to run when you invoke npm install. This can be accomplished by adding the highlighted line of the following code snippet (the path to the webdriver-manager binary might differ in your local environment):

(package.json)

{
  "devDependencies": {
    // long list of node package dependencies
  },
  "scripts": {
    // additional existing script additions may be listed here
    "install": "node node_modules/protractor/bin/webdriver-manager update"
  }
}

The order in which the dependencies are listed is not important.

Note

JSON does not support comments; they are shown in the preceding code only to provide you context within the file. Attempting to provide a JSON file with JavaScript-style comments in it to the npm installer will cause the installer to fail.

Modifying your Gruntfile

Grunt needs to be informed of where to look for the Protractor configuration file as well as how to use it now that the npm module has been installed. Modify your Gruntfile.js file as follows:

(Gruntfile.js)

module.exports = function (grunt) {

  ...

  // Define the configuration for all the tasks
  grunt.initConfig({

    // long list of configuration options for
    // grunt tasks like minification, JS linting, etc.

    protractor: {
      options: {
        keepAlive: true,
        configFile: "protractor.conf.js"
      },
      run: {}
    }
  });

  ...
};

If this is done correctly, it should enable you to call protractor:run within a Grunt task.

In order to run Protractor and the E2E test suite when you invoke the grunt test command, you must extend the relevant Grunt task, as follows:

(Gruntfile.js)

grunt.registerTask('test', [
  // list of subtasks to run during `grunt test`
  'karma',
  'protractor:run'
]);

The order of these tasks is not set in stone, but karma and protractor:run must follow any tasks involved in setting up the test servers, so it is prudent to list them last.
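For example, if your project serves the application during tests with grunt-contrib-connect, the ordering might look like the following; the setup subtasks shown here are assumptions and will vary by project:

```javascript
grunt.registerTask('test', [
  // build steps and test servers come first
  'clean:server',
  'concurrent:test',
  'connect:test',
  // test runners come last, once the servers are up
  'karma',
  'protractor:run'
]);
```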

Setting your Protractor configuration

Obviously, the Protractor configuration you just set in the Gruntfile refers to a file that doesn't exist yet. Create the protractor.conf.js file and add the following:

(protractor.conf.js)

exports.config = {
  specs: ['test/e2e/*_test.js'],
  baseUrl: 'http://localhost:9001',
  // your filenames, versions, and paths may differ
  seleniumServerJar: 'node_modules/protractor/selenium/selenium-server-standalone-2.42.2.jar',
  chromeDriver: 'node_modules/protractor/selenium/chromedriver'
};

This points Protractor to your test directory(ies), the Yeoman baseUrl that acts as the default test port (9001), and the Selenium server and browser setup files. With this configuration, Protractor will boot a new instance of a Selenium server every time you run tests, run the E2E tests in the Chrome browser, and tear the server down when the tests have finished running.
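If you want to target a browser other than Chrome, or several browsers at once, the Protractor configuration also accepts capabilities options; the following is a sketch:

```javascript
exports.config = {
  specs: ['test/e2e/*_test.js'],
  baseUrl: 'http://localhost:9001',

  // run the suite in a single alternate browser...
  capabilities: {
    browserName: 'firefox'
  }

  // ...or in several browsers, one after another:
  // multiCapabilities: [
  //   { browserName: 'chrome' },
  //   { browserName: 'firefox' }
  // ]
};
```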

Running the test suite

If all of these steps were accomplished successfully, running grunt test should run your entire test suite.

How it works…

Much of the power and utility that Grunt has to offer stems from its modular automation topology. The setup you just configured works roughly as follows:

  1. The grunt test command is run from the command line.
  2. Grunt matches the test to its corresponding task definition in the Gruntfile.js file.
  3. The tasks defined within the test are run sequentially, eventually coming to the protractor:run entry.
  4. Grunt runs protractor:run and matches this to the Protractor configuration definition, which resides in the protractor.conf.js file.
  5. Protractor locates protractor.conf.js, which at a minimum tells Grunt how to boot a Selenium server, where to find the test files, and the location of the test server.
  6. All found tests are run.

See also

  • The Understanding Protractor recipe provides greater insight into what the Protractor test runner really is
  • The Writing basic E2E tests recipe demonstrates how to build an end-to-end test foundation for a simple application

Writing basic unit tests

Unit tests should be the foundation of your test suite. Compared to end-to-end tests, they are generally faster, easier to write, and easier to maintain; they require less setup overhead, scale more readily with the application, and provide a more obvious path to the problem area of the application when you debug a failed test run.

There is a surplus of extremely simplistic testing examples available online and rarely do they present a component or test case that is applicable in a real-world application. Instead, this recipe will jump directly to an understandable application component and show you how to write a full set of tests for it.

Getting ready

For this recipe, it is assumed that you have correctly configured your local setup so that Grunt will be able to find your test file(s) and run them on the Karma test runner.

Suppose that you have the following controller within your application:

(app.js)

angular.module('myApp')
.controller('HandleCtrl', function($scope, $http) {
  $scope.handle = '';
  $scope.$watch('handle', function(value) {
      if (value.length < 6) {
          $scope.valid = false;
      } else {
          $http({
              method: 'GET',
              url: '/api/handle/' + value
          }).success(function(data, status) {
              if (status == 200 &&
                  data.handle == $scope.handle &&
                  data.id === null) {
                  $scope.valid = true;
              } else {
                  $scope.valid = false;
              }
          });
      }
  });
});

In this example application, a user named Jake Hsu will go through a signup flow and attempt to select a unique handle. In order to guarantee the selection of a unique handle while still in the signup flow, a scope watcher is set up against the server to check whether that handle already exists. Through a mechanism outside the controller (and presumably in the view), the value of $scope.handle will be manipulated, and each time its value changes, the application will send a request to the backend server and set $scope.valid based on what the server returns.

How to do it…

An exhaustive set of unit tests for something like the situation mentioned in the previous section can become quite lengthy. When writing tests for a production application, rarely is it prudent to spend time to create an exhaustive set of unit tests for a component, unless it is critical to the application (payments and authentication come to mind).

Here, it is probably sufficient to create a set of tests that attempts to cover scenarios that mark a handle as invalid on the client side, invalid on the server side, and valid on the server side.

Initializing the unit tests

Before writing the actual tests, it is necessary to create and mock the external components that the test component will interact with. This can be done as follows:

(handle_controller_test.js)

// monolithic test suite for HandleCtrl
describe('Controller: HandleCtrl', function() {
  // the components to be tested reside in the myApp module
  // therefore it must be injected
  beforeEach(module('myApp'));

  // values which will be used in multiple closures
  var HandleCtrl, scope, httpBackend, createEndpointExpectation;

  // this will be run before each it(function() {}) clause
  // to create or refresh the involved components
  beforeEach(inject(function($controller, $rootScope, $httpBackend) {

    // creates the mock backend server
    httpBackend = $httpBackend;

    // creates a fresh scope
    scope = $rootScope.$new();

    // creates a new controller instance and inserts
    // the created scope into it
    HandleCtrl = $controller('HandleCtrl', {
      $scope: scope
    });

    // configures the httpBackend to match outgoing requests
    // that are expected to be generated by the controller
    // and return payloads based on what the request contained;
    // this will only be invoked when needed
    createEndpointExpectation = function() {
      // URL matching utilizes a simple regex here
      // expectGET requires that a request be created
      httpBackend.expectGET(/\/api\/handle\/\w+/i).respond(
        function(method, url, data, headers){
          var urlComponents = url.split("/")
            , handle = urlComponents[urlComponents.length - 1]
            , payload = {handle: handle};

          if (handle == 'jakehsu') {
            // handle exists in database, return ID
            payload.id = 1;
          } else {
            // handle does not exist in database
            payload.id = null;
          }

          // AngularJS allows for this return format;
          // [status code, data, configuration]
          return [200, payload, {}];
        }
      );
    };
  }));

  // configures the httpBackend to check that the mock
  // server did not receive extra requests or did not
  // see a request when it should have expected one
  afterEach(function() {
    // verify that all expect<HTTPverb>() expectations were filled
    httpBackend.verifyNoOutstandingExpectation();
    // verify that the mock server did not receive requests it
    // was not expecting
    httpBackend.verifyNoOutstandingRequest();
  });

  // unit tests go here

});

Creating the unit tests

With the unit test initialization complete, you will now be able to formally create the unit tests. Each it(function() {}) clause will count as one unit test towards the counted total, which can be found in the grunt test readout. The unit test is as follows:

(handle_controller_test.js)

// describe() serves to annotate what the module will test
describe('Controller: HandleCtrl', function() {

  // unit test initialization
  beforeEach( ... );
  afterEach( ... );

  // client invalidation unit test
  it('Should mark handles which are too short as invalid', 
    function() {
      // attempt test handle beneath the character count floor
      scope.handle = 'jake';
      // $watch will not be run until you force a digest loop
      scope.$apply();
      // this clause must be fulfilled for the test to pass
      expect(scope.valid).toBe(false);
    }
  );

  // client validation, server invalidation unit test
  it('Should mark handles which exist on the server as invalid', 
    function() {
      // server is set up to expect a specific request
      createEndpointExpectation();
      // attempt test handle above character count floor,
      // but which is defined in the mock server to have already
      // been taken
      scope.handle = 'jakehsu';
      // force a digest loop
      scope.$apply();
      // the mock server will not return a response until 
      // flush() is invoked
      httpBackend.flush();
      // this clause must be fulfilled for the test to pass
      expect(scope.valid).toBe(false);
    }
  );

  // client validation, server validation unit test
  it('Should mark handles available on the server as valid', 
    function() {
      // server is set up to expect a specific request
      createEndpointExpectation();
      // attempt handle above character floor and
      // which is defined to be available on the mock server
      scope.handle = 'jakehsu123';
      // force a digest loop
      scope.$apply();
      // return a response
      httpBackend.flush();
      // this clause must be fulfilled for the test to pass
      expect(scope.valid).toBe(true);
    }
 );
});

How it works…

Each unit test lays out the sequential steps of a scenario that the application is supposed to handle. Though JavaScript executed natively in the browser is heavily asynchronous, the unit test faculties give you a great deal of control over asynchronous operations: you decide when they complete, and can therefore test your application's handling of them in different ways. The $http requests and $digest cycles are both components of AngularJS that are expected to take indeterminate amounts of time to complete. Here, though, you are given fine-grained control over their execution, and it is to your advantage to incorporate that ability into the test suite for more extensive test coverage.

Initializing the controller

To test the controller, it and the components it uses must be created or mocked. Creating the controller instance can be easily accomplished with $controller(), but in order to test how it handles scope transformations, it must be provided with a scope instance. Since all scopes prototypically inherit from $rootScope, it is sufficient here to create an instance of $rootScope and provide that as the created controller's scope.

Initializing the HTTP backend

Mocking a backend server can at times seem to be tedious and verbose, but it allows you to very precisely define how your single-page application is expected to interact with remote components.

Here, you invoke expectGET() with a URL regex in order to match an outgoing request generated by the controller. You are able to define exactly what happens when that URL sees a request come through, much in the same way that you would when you build a server API.

Here, it is prudent to encapsulate all the backend endpoint initialization within a function because its definition specifies how the application controller must behave for the test to pass. The $httpBackend service offers expect<HTTPverb>() and when<HTTPverb>() for use, and together they allow powerful unit test definition. The expect() methods require that they see a matching request to the endpoint during the unit test, whereas the when() methods merely enable the mock backend to appropriately handle that request. At the conclusion of each unit test, the afterEach() clause verifies that the mock backend has seen all the requests that it was expected to, using the verifyNoOutstandingExpectation() method, and that it didn't see any requests it wasn't expected to, using the verifyNoOutstandingRequest() method.
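The distinction between the two method families can be sketched as follows, using the endpoint from this recipe:

```javascript
// expectGET(): the unit test FAILS if no matching request is made
// (verifyNoOutstandingExpectation() enforces this in afterEach())
httpBackend.expectGET(/\/api\/handle\/\w+/i)
  .respond(200, {handle: 'jakehsu123', id: null});

// whenGET(): merely defines a response should a matching request
// arrive; no request is required for the test to pass
httpBackend.whenGET(/\/api\/handle\/\w+/i)
  .respond(200, {handle: 'jakehsu123', id: null});
```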

Formally running the unit tests

When running the unit tests, AngularJS makes no assumptions about how your application should or might behave with regard to interfacing with components that involve variable latent periods and asynchronous callbacks. The $watch expressions and $httpBackend will behave exactly as instructed and exactly when instructed.

By their nature, the $watch expressions can take a variable amount of time depending on how long it takes the model changes to propagate throughout the scope, and how many digest loops are required for the model to reach equilibrium. When you run a unit test, a scope change (as demonstrated here) will not trigger a $watch expression callback until $apply() is explicitly invoked. This allows you to use the intermediate logic and other modifications to be made in different ways to fully exercise the conditions under which a $watch expression might occur.

Furthermore, it should be obvious that a remote server cannot be relied upon to respond in a timely fashion, or even at all. When you run a unit test, requests can be dispatched to the mock server normally, but the server will delay sending a response, and thereby triggering the asynchronous callbacks, until it is explicitly instructed to with flush(). This allows you to test the handling of requests that return normally or slowly, that come back malformed or failed, or that time out altogether.
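For example, you can assert on the intermediate state before releasing the mock server's response; the following is a sketch built on the test setup from this recipe:

```javascript
it('Should not mark a handle as valid before the server responds',
  function() {
    // server is set up to expect a specific request
    createEndpointExpectation();
    scope.handle = 'jakehsu123';
    scope.$apply();
    // the request has been dispatched, but no response has
    // arrived yet, so $scope.valid has not been touched
    expect(scope.valid).toBeUndefined();
    // release the mock response and verify the final state
    httpBackend.flush();
    expect(scope.valid).toBe(true);
  }
);
```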

There's more…

Unit tests should be the core of your test suite as they provide the best assurance that the components of your application are behaving as expected. The rule of thumb is: if it's possible to effectively test a component with a unit test, then you should use a unit test.

Writing basic E2E tests

End-to-end tests effectively complement unit tests. Unit tests make no assumptions about the state of the encompassing systems (and thereby require manual work to mock or fabricate that state for the sake of simulation), and they are intended to test extremely small, often irreducible pieces of functionality. End-to-end tests take an orthogonal approach: they create and manipulate the system state via the means usually available to the client or end user, and make sure that a complete user interface flow can be executed successfully. End-to-end test failures often cannot pinpoint the exact origin of an error. However, they are absolutely a necessity in a testing suite, since they ensure that the interacting application components cooperate properly and provide a safety net to catch misbehavior arising from the complexity of connecting pieces of software.

Getting ready

This recipe will use the same application controller setup from the preceding recipe, Writing basic unit tests. Please refer to the setup instructions and code explained there.

In order to provide an interface to utilize the controller, the application will also incorporate the following:

(app.js)

angular.module('myApp', [
  'ngRoute'
])
  .config([
    '$routeProvider',
    function($routeProvider){
      $routeProvider
        .when('/signup', {
          templateUrl: 'views/main.html'
        })
        .otherwise({
          redirectTo: '/',
          template: '<a href="/#/signup">Go to signup page</a>'
        });
    }
  ]);

(views/main.html)

<div ng-controller="HandleCtrl">
  <input type="text" ng-model="handle" />
  <h2 id="success-msg" ng-show="valid">
    That handle is available!
  </h2>
  <h2 id="failure-msg" ng-hide="valid">
    Sorry, that handle cannot be used.
  </h2>
</div>

(index.html)
<body ng-app="myApp">
  <div ng-view=""></div>
</body>

Note

Take note that here, these files are only the notable pieces required for a working application that the Protractor test runner will use. You will need to incorporate these into a full AngularJS application for Protractor to be able to use them.

How to do it…

Your end-to-end test suite should cover all user flows as best as you can. Ideally, you will optimize for a balance between modularity, independence, and redundancy avoidance when you write tests. For example, each individual test probably doesn't need to log out at the end, since this would only slow down the completion of the suite. However, if you are writing E2E tests to verify that your application's authentication scheme prevents unwanted navigation after authentication credentials have been revoked, then an array of tests that exercise actions after logout would be very appropriate. The focus of your tests will vary depending on the style and purpose of your application, as well as the bulk and complexity of the codebase behind it.

Since the protractor.conf.js file has been instructed to look for test files in the test/e2e/ directory, the following would be an appropriate test suite in that location:

(test/e2e/signup_flow_test.js)

describe('signup flow tests', function() {

  it('should link to /signup if not already there', function() {
    // direct browser to relative url, 
    // page will load synchronously
    browser.get('/');

    // locate and grab <a> from page
    var link = element(by.css('a'));

    // check that the correct <a> is selected 
    // by matching contained text
    expect(link.getText()).toEqual('Go to signup page');

    // direct browser to nonsense url
    browser.get('/#/hooplah');

    // simulated click
    link.click();

    // protractor waits for the page to render, 
    // then checks the url
    expect(browser.getCurrentUrl()).toMatch('/signup');
  });
});

describe('routing tests', function() {

  var handleInput,
      successMessage,
      failureMessage;

  function verifyInvalid() {
    expect(successMessage.isDisplayed()).toBe(false);
    expect(failureMessage.isDisplayed()).toBe(true);
  }

  function verifyValid() {
    expect(successMessage.isDisplayed()).toBe(true);
    expect(failureMessage.isDisplayed()).toBe(false);
  }

  beforeEach(function() {
    browser.get('/#/signup');

    var messages = element.all(by.css('h2'));

    expect(messages.count()).toEqual(2);

    successMessage = messages.get(0);
    failureMessage = messages.get(1);

    handleInput = element(by.model('handle'));

    // getText() always returns '' for <input> elements,
    // so inspect the value attribute instead
    expect(handleInput.getAttribute('value')).toEqual('');
  });

  it('should display invalid handle on pageload', function() {

    verifyInvalid();

    expect(failureMessage.getText()).
      toEqual('Sorry, that handle cannot be used.');
  });

  it('should display invalid handle for insufficient characters', function() {

    // type to modify model and trigger $watch expression
    handleInput.sendKeys('jake');

    verifyInvalid();
  })

  it('should display invalid handle for a taken handle', function() {

    // type to modify model and trigger $watch expression
    handleInput.sendKeys('jakehsu');

    verifyInvalid();
  })

  it('should display valid handle for an untaken handle', function() {

    // type to modify model and trigger $watch expression
    handleInput.sendKeys('jakehsu123');

    verifyValid();
  })
});

How it works…

Protractor utilizes a Selenium server and WebDriver to fully render your application in the browser and to simulate a user interacting with it. The end-to-end test suite provides facilities for you to simulate native browser events in the context of an actual running instance of your application. The end-to-end tests verify correctness not by inspecting the JavaScript object state of the application, but rather by inspecting the state of the browser or the DOM.

Since end-to-end tests interact with an actual browser instance, they must be able to manage asynchronicity and uncertainty during execution. To do this, each of the element selectors and assertions in these end-to-end tests returns a promise. Protractor automatically waits for each promise to resolve before continuing to the next test statement.
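
The promise chaining that underlies this behavior can be sketched in plain Node (illustrative only: fakeCommand and its canned results are invented here, and the real WebDriver control flow is considerably more sophisticated):

```javascript
// Each simulated command returns a promise; chaining with .then()
// guarantees every step waits for the previous one to resolve.
function fakeCommand(result) {
  return new Promise(function(resolve) {
    setTimeout(function() {
      resolve(result);
    }, 10);
  });
}

fakeCommand('/')                              // like browser.get('/')
  .then(function() {
    return fakeCommand('Go to signup page');  // like link.getText()
  })
  .then(function(text) {
    // Protractor's patched expect() performs this resolution for you
    console.log('link text: ' + text);
  });
```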

There's more…

AngularJS provides the ngMockE2E module, which allows you to mock a backend server. Incorporating the module gives you the ability to prevent the application from making actual requests to a server, and instead simulates request handling in a fashion similar to that of the unit tests. However, incorporating this module into your application is actually not recommended in many cases, for the following reasons:

  • Currently, integrating ngMockE2E correctly into your end-to-end test runner involves a lot of red tape and can cause problems involving synchronization with Protractor.
  • Mocking out the spectrum of end-to-end backend server responses in the ngMock syntax can become very tedious and verbose, as larger applications will demand more complexity in the mock server's response logic.
  • Mocking out the backend endpoints for end-to-end tests defeats much of the purpose of the tests in the first place. The end-to-end tests you write are intended to verify that all components of the application bind and perform together properly in the context of the user interface. Creating fake responses from the server can mask edge cases involving backend communication that would otherwise be caught by tests that send requests to a real server.

Therefore, you are encouraged to structure your end-to-end tests to send requests to a legitimate backend, which more effectively and realistically simulates client-server HTTP conversations.

See also

  • The Setting up a simple mock backend server recipe demonstrates a clever method that will allow you to iterate quickly with your test suite and application
  • The Writing DAMP tests recipe demonstrates even more best practices for writing AngularJS tests effectively
  • The Using the Page Object test pattern recipe demonstrates even more best practices for writing AngularJS tests effectively

Setting up a simple mock backend server

It isn't hard to see why end-to-end tests that communicate with a real server returning mock responses can be useful. Beyond the complexity of testing the business logic your application uses to handle data returned from the server, a robust end-to-end test suite should also cover the spectrum of possible outcomes of HTTP communication (timeouts, server errors, and more). A superb way of testing these corner cases is to create a mock server that your application can hit. You can then configure the mock server to support different endpoints with predetermined behavior: failures, slow response times, and different response data payloads, to name a few.

You are fully able to have your end-to-end tests communicate with the API as they normally would, as the end-to-end test runner does not mock the backend server by default. If this is suitable for your testing purposes, then setting up a mock backend server is probably unnecessary. However, if you wish for your tests to cover operations that are not idempotent or will irreversibly change the state of the backend server, then setting up a mock server makes a good deal of sense.

How to do it…

Your choice of mock server style is essentially unconstrained; the only requirement is that it lets you manually configure the responses returned for expected HTTP requests. As you might imagine, this can be as simple or as complex as you want, but mock HTTP endpoints that try to replicate large amounts of production application logic tend to require frequent overhaul and repair as the application evolves.

If you can design or refactor your end-to-end tests to exercise concise user flows and mock out the API they communicate with as simply as possible (in most cases, you absolutely should be able to), then do so; usually, this means hardcoding the responses. Enter the file-based API server!

(httpMockBackend.js)

// Define some initial variables.
var applicationRoot = __dirname.replace(/\\/g, '/')
  , ipaddress = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1'
  , port = process.env.OPENSHIFT_NODEJS_PORT || 5001
  , mockRoot = applicationRoot + '/test/mocks/api'
  , mockFilePattern = '.json'
  , mockRootPattern = mockRoot + '/**/*' + mockFilePattern
  , apiRoot = '/api'
  , fs = require("fs")
  , glob = require("glob");

// Create Express application
var express = require('express');
var app = express();

// Read the directory tree according to the pattern specified above.
var files = glob.sync(mockRootPattern);

// Set CORS headers once so the mock server can be used with local AJAX.
// (Registering this middleware inside the loop below would add a
// duplicate copy of it for every mock file found.)
app.all('*', function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header(
    'Access-Control-Allow-Headers', 
    'X-Requested-With'
  );
  next();
});

// Register mappings for each file found in the directory tree.
if(files && files.length > 0) {
  files.forEach(function(filePath) {

    var mapping = apiRoot + filePath.replace(mockRoot, '').replace(mockFilePattern,'')
      , fileName = filePath.replace(/^.*[\/]/, '');

    // any HTTP verbs you might need
    [/^GET/, /^POST/, /^PUT/, /^PATCH/, /^DELETE/].forEach(
      function(httpVerbRegex) {

        // perform the initial regex of the HTTP verb 
        // against the filename
        var match = fileName.match(httpVerbRegex);

        if (match != null) {
          // remove the HTTP verb prefix from the filename
          mapping = mapping.replace(match[0] + '_', '');

          // create the endpoint
          app[match[0].toLowerCase()](mapping, function(req,res) {

            // handle the request by responding 
            // with the JSON contents of the file
            var data =  fs.readFileSync(filePath, 'utf8');
            res.writeHead(200, { 
              'Content-Type': 'application/json' 
            });
            res.write(data);
            res.end();
          });
        }
      }
    );

    console.log('Registered mapping: %s -> %s', mapping, filePath);
  });
} else {
  console.log('No mappings found! Please check the configuration.');
}

// Start the API mock server.
console.log('Application root directory: [' + applicationRoot +']');
console.log('Mock Api Server listening: [http://' + ipaddress + ':' + port + ']');
app.listen(port, ipaddress);

This is a simple node program that can be run using the following command:

$ node httpMockBackend.js

Note

This Node.js program depends on several npm packages, which can be installed using the npm install glob express command. The fs module is part of the Node.js standard library and does not need to be installed.

How it works…

This simple Express server conveniently matches incoming request URLs to the corresponding JSON files in the test/mocks/api/ child directory, and it matches the HTTP verb of the request to the file prefixed with that verb. So, a GET request to localhost:5001/api/user will return the JSON contents of /test/mocks/api/GET_user.json, a PATCH request to localhost:5001/api/user/1 will return the JSON contents of /test/mocks/api/user/PATCH_1.json, and so on. Since files are automatically discovered and added to the Express routing, this allows you to quickly and easily simulate a backend server with very different request types.
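
The filename-to-route translation at the heart of this server can be distilled into a small pure function. The following sketch mirrors the logic above; routeFor itself is an illustrative helper, not part of httpMockBackend.js:

```javascript
// Sketch of the filename-to-route mapping used by the mock server.
// (routeFor is an invented helper for illustration only.)
function routeFor(filePath, mockRoot, apiRoot) {
  // '/test/mocks/api/GET_user.json' -> '/api/GET_user'
  var mapping = apiRoot +
    filePath.replace(mockRoot, '').replace('.json', '');

  // isolate the filename and extract the HTTP verb prefix
  var fileName = filePath.replace(/^.*[\/]/, '');
  var match = fileName.match(/^(GET|POST|PUT|PATCH|DELETE)/);
  if (match === null) {
    return null;
  }

  // strip the verb prefix: '/api/GET_user' -> '/api/user'
  return {
    verb: match[0].toLowerCase(),
    path: mapping.replace(match[0] + '_', '')
  };
}

console.log(routeFor('/test/mocks/api/GET_user.json',
                     '/test/mocks/api', '/api'));
// { verb: 'get', path: '/api/user' }
console.log(routeFor('/test/mocks/api/user/PATCH_1.json',
                     '/test/mocks/api', '/api'));
// { verb: 'patch', path: '/api/user/1' }
```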

There's more…

This setup is obviously limited in a number of ways, including its lack of conditional request handling and authentication, to name a few. It is not intended as a full replacement for a backend by any means, but if you are trying to quickly build a test suite or a piece of your application that sits atop an HTTP API, you will find this tool very useful.
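
As a rough sketch of one possible extension, endpoints could map to handler functions rather than static files, giving each one predetermined behavior. The endpoint names, response shapes, and delayMs field below are all invented for illustration:

```javascript
// Illustrative only: each endpoint maps to a behavior that fills in a
// predetermined status, body, and artificial delay.
var behaviors = {
  '/api/ok':   function(res) { res.status = 200; res.body = '{"ok":true}'; },
  '/api/fail': function(res) { res.status = 500; res.body = '{"error":"boom"}'; },
  '/api/slow': function(res) { res.status = 200; res.body = '{}'; res.delayMs = 3000; }
};

function handle(url) {
  var res = { status: 404, body: '', delayMs: 0 };
  if (behaviors[url]) {
    behaviors[url](res);
  }
  return res;
}

console.log(handle('/api/fail').status);  // 500
console.log(handle('/api/slow').delayMs); // 3000
console.log(handle('/api/nope').status);  // 404
```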

See also

  • The Writing basic E2E tests recipe demonstrates the core strategies that should be incorporated into your end-to-end test suite

Writing DAMP tests

Any seasoned developer will almost certainly be familiar with the Don't Repeat Yourself (DRY) programming principle. When architecting production applications, the DRY principle promotes improved code maintainability by ensuring that there is no logic duplication (or as little as feasibly possible) in order to allow efficient system additions and modifications.

Descriptive And Meaningful Phrases (DAMP), on the other hand, promotes improved code readability by ensuring that there is not so much abstraction that the code becomes difficult to understand, even at the expense of introducing redundancy. Jasmine encourages this by providing a Domain Specific Language (DSL) syntax that approximates how humans would linguistically declare and reason about how the program should work.
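
To see why this DSL reads well, here is a minimal sketch of a describe/it-style runner in plain JavaScript; this is an invented illustration, not Jasmine's actual implementation:

```javascript
// Invented mini-DSL for illustration: test names read as phrases,
// so a report line is already a human-readable sentence.
function describe(name, fn) {
  console.log(name);
  fn();
}

function it(name, fn) {
  try {
    fn();
    console.log('  PASS: it ' + name);
  } catch (e) {
    console.log('  FAIL: it ' + name + ' (' + e.message + ')');
  }
}

describe('signup handle validation', function() {
  it('should reject handles shorter than five characters', function() {
    var handle = 'jake';
    if (handle.length >= 5) {
      throw new Error('expected an invalid handle');
    }
  });
});
```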

How to do it…

The following tests are a sample of the end-to-end tests from the Writing basic E2E tests recipe, presented here unchanged:

  it('should display invalid handle for insufficient characters', function() {

    // type to modify model and trigger $watch expression
    handleInput.sendKeys('jake');

    verifyInvalid();
  })

  it('should display invalid handle for a taken handle', function() {

    // type to modify model and trigger $watch expression
    handleInput.sendKeys('jakehsu');

    verifyInvalid();
  })

As is, this would be considered a set of DAMP tests. A developer running these tests would have little trouble quickly piecing together what is supposed to happen, where in the code it's happening, and why the tests might be failing.

However, a DRY-minded developer would examine these tests, identify the redundancy between them, and refactor them into something like the following:

  it('should reject invalid handles', function() {
    // type to modify model and trigger $watch expression
    ['jake', 'jakehsu'].forEach(function(handle){
      handleInput.clear();
      handleInput.sendKeys(handle);
      verifyInvalid();
    });
  });

This code is definitely more in line with the DRY principle than the previous version, and the tests will still pass and still test the proper behavior, but there is already a measurable loss of information that hurts the quality of the tests. The initial version presented two test cases that were both expected to be invalid, but for different reasons: one because of a minimum handle length, the other because the request to the mock server reveals that the handle is already taken. If one of those tests were to fail, the developer running them would be directed to the exact test case that failed, would have good insight into which aspect of validation was broken, and could quickly act accordingly. In the DRY version, the developer would see a failed test, but since the two cases were condensed, it isn't immediately obvious which of them caused the failure or why. In this scenario, the DAMP tests are more conducive to rapidly locating and repairing bugs that crop up in the application.

There's more…

The example in this recipe is a relatively simple one, but it demonstrates the fundamental difference between the DAMP and DRY practices. In general, the rule of thumb is for production code to be as DRY as possible, and for test suites to be as DAMP as possible. Production code should be optimized for maintainability, and tests for understandability.

Perhaps counterintuitively, the DAMP principle is not necessarily mutually exclusive with the DRY principle—they are merely suited for different purposes. Unit and end-to-end tests should be DRYed wherever it will make the code more maintainable as long as it doesn't hurt the readability of the tests. Generally, this will fall under the setup and teardown routines for tests—use the DRY principle for these routines as much as possible, since they infrequently contain information or procedures that are relevant to the application component(s) that the test is covering. Authentication and navigation are both good examples of test setup/teardown that respond well to DRY refactoring.

See also

  • The Writing basic E2E tests recipe demonstrates the core strategies that should be incorporated into your end-to-end test suite
  • The Using the Page Object test pattern recipe demonstrates even more best practices for writing AngularJS tests effectively

Using the Page Object test pattern

Creating and maintaining a test suite for an application is a considerable amount of overhead, and a prudent developer will mold a test suite such that the normal evolution of a software application will not force developers to spend an unduly long amount of time to maintain the test code.

A simple but sensible design pattern, the Page Object pattern, encapsulates page-specific details of the user experience and abstracts them away from the logic of the actual tests.

How to do it…

The test/e2e/signup_flow_test.js file presented in the Writing basic E2E tests recipe can be refactored into the following files using the Page Object pattern.

The test/pages/main.js file can be created as follows:

(test/pages/main.js)

var MainPage = function () {
  // direct the browser when the page object is initialized
  browser.get('/');
};

MainPage.prototype = Object.create({},
  {
    // getter for element in page
    signupLink: {
      get: function() {
        return element(by.css('a'));
      }
    }
  }
);

module.exports = MainPage;

The test/pages/signup.js file can be created as follows:

(test/pages/signup.js)

var SignupPage = function () {
  // direct the browser when the page object is initialized
  browser.get('/#/signup');
};

SignupPage.prototype = Object.create({},
  {
    // getters for elements in the page
    messages: {
      get: function() {
        return element.all(by.css('h2'));
      }
    },
    successMessage: {
      get: function() {
        return this.messages.get(0);
      }
    },
    failureMessage: {
      get: function() {
        return this.messages.get(1);
      }
    },
    handleInput: {
      get: function() {
        return element(by.model('handle'));
      }
    },
    // getters for page validation
    successMessageVisibility: {
      get: function() {
        return this.successMessage.isDisplayed();
      }
    },
    failureMessageVisibility: {
      get: function() {
        return this.failureMessage.isDisplayed();
      }
    },
    // interface for page element
    typeHandle: {
      value: function(handle) {
        this.handleInput.sendKeys(handle);
      }
    }
  }
);

module.exports = SignupPage;

The test/e2e/signup_flow_test.js file can be refactored as follows:

(test/e2e/signup_flow_test.js)

var SignupPage = require('../pages/signup.js')
  , MainPage = require('../pages/main.js');

describe('signup flow tests', function() {

  var page;

  beforeEach(function() {
    // initialize the page object
    page = new MainPage();
  });

  it('should link to /signup if not already there', function() {

    // check that the correct <a> is selected 
    // by matching contained text
    expect(page.signupLink.getText()).toEqual('Go to signup page');

    // direct browser to nonsense url
    browser.get('/#/hooplah');

    // simulated click
    page.signupLink.click();

    // protractor waits for the page to render, 
    // then checks the url
    expect(browser.getCurrentUrl()).toMatch('/signup');
  });
});

describe('routing tests', function() {

  var page;

  function verifyInvalid() {
    expect(page.successMessageVisibility).toBe(false);
    expect(page.failureMessageVisibility).toBe(true);
  }

  function verifyValid() {
    expect(page.successMessageVisibility).toBe(true);
    expect(page.failureMessageVisibility).toBe(false);
  }

  beforeEach(function() {

    // initialize the page object
    page = new SignupPage();

    // check that there are two messages on the page
    expect(page.messages.count()).toEqual(2);

    // check that the handle input text is empty
    expect(page.handleInput.getAttribute('value')).toEqual('');

  });

  it('should display invalid handle on pageload', function() {

    // check that initial page state is invalid
    verifyInvalid();

    expect(page.failureMessage.getText()).
      toEqual('Sorry, that handle cannot be used.');
  });

  it('should display invalid handle for insufficient characters', function() {

    // type to modify model and trigger $watch expression
    page.typeHandle('jake');

    verifyInvalid();
  })

  it('should display invalid handle for a taken handle', function() {

    // type to modify model and trigger $watch expression
    page.typeHandle('jakehsu');

    verifyInvalid();
  })

  it('should display valid handle for an untaken handle', function() {

    // type to modify model and trigger $watch expression
    page.typeHandle('jakehsu123');

    verifyValid();
  })
})

How it works…

It should be immediately obvious why this test pattern is desirable. Looking through the actual tests, you no longer need to know the specifics of the page contents to understand how the test is manipulating the application.

The page objects take advantage of the second and optional objectProperties argument of Object.create() to build a very pleasant interface to the page. By using these page objects, you are able to avoid all of the nastiness of creating a sea of local variables to store references to the pieces of the page. They also offer a great deal of flexibility in terms of where the bulk of your test logic lies. These tests could potentially be refactored even more to move the validation logic into the page objects. Decisions like these are ultimately up to the developer, and it boils down to their preference in terms of how dense the page objects should be.

There's more…

In this example, the page object getter interface is especially useful since the nature of end-to-end tests implies that you will need to evaluate the page state at several checkpoints in the lifetime of the test, and a defined getter that performs this evaluation while appearing as a page object property yields an extremely clean test syntax.
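
The getter mechanism itself is plain JavaScript and can be observed in isolation. The following minimal sketch uses invented names (Page, greeting, accessCount) and involves no Protractor at all:

```javascript
// A property defined with a get function is re-evaluated on every
// access, which is why the page object elements are always "fresh".
var Page = function() {
  this.accessCount = 0;
};

Page.prototype = Object.create({}, {
  greeting: {
    get: function() {
      this.accessCount += 1;
      return 'access #' + this.accessCount;
    }
  }
});

var page = new Page();
console.log(page.greeting); // 'access #1'
console.log(page.greeting); // 'access #2'
```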

Also note the multiple layers of indirection within the SignupPage object. Layering in this fashion is absolutely to your advantage, and the page object is a prime place in your end-to-end tests where it really does pay to be DRY. Repeatedly locating elements on the page is not a place for verbosity!

See also

  • The Writing basic E2E tests recipe demonstrates the core strategies that should be incorporated into your end-to-end test suite
  • The Writing DAMP tests recipe demonstrates even more best practices for writing AngularJS tests effectively