Chapter 10. Continuous Integration

In software engineering, continuous integration (CI) can be defined as the practice of merging all developer working copies into a shared repository several times a day. Each integration triggers automated builds and unit tests on build servers, improving software quality through frequent, small efforts.

WebPageTest can be integrated into a CI pipeline to test web pages in the build or staging server. It can be used to indicate when the performance of web pages has regressed. Such integration can be done by customizing the running and reading of tests, as described in Chapter 9.

A common workflow would be to run WebPageTest after the CI pipeline successfully builds and all unit tests pass. Using either polling or pingback to retrieve WebPageTest results, some metrics from the full results set should be compared against expected metrics. For example, data.median.firstView.firstPaint must be less than 800 ms, or data.median.firstView.domElements must be between 800 and 1,000.
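
As a preview of the approach detailed in the rest of this chapter, a minimal sketch of such a check using the Node.js wrapper might look like the following (the staging URL and thresholds are illustrative, not part of the wrapper):

var WebPagetest = require('webpagetest');

var wpt = new WebPagetest('wpt-private-server.com', 'API_KEY_GOES_HERE');

// Run a test against the staging build and poll for results every 5 seconds.
wpt.runTest('http://staging.example.com', {pollResults: 5}, function(err, res) {
  if (err) {
    return console.log(err);
  }
  var firstView = res.data.median.firstView;
  // Fail the CI step if key metrics regress beyond the expected thresholds.
  if (firstView.firstPaint >= 800 ||
      firstView.domElements < 800 || firstView.domElements > 1000) {
    process.exit(1);
  }
});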

In this chapter, you will first learn how to consume WebPageTest API endpoints via the command line or as a Node.js application. You will also learn how to easily automate the whole process of running a test and reading its results in order to integrate with some popular CI tools.

Node.js Wrapper

webpagetest is a Node.js package available on NPM (package manager for Node.js). It provides a wrapper around the WebPageTest RESTful API with the following features:

  • Normalizes API endpoints and parameter names with JSON response

  • Command-line tool with both short and long options

  • Methods with asynchronous callback function for Node.js applications

  • Polling and pingback helpers to run tests and read results synchronously

  • Command-line batch jobs

  • WebPageTest RESTful API proxy

  • WebPageTest scripting helper

  • CI test specs

The webpagetest Node.js wrapper is an open source project under the MIT license and lives on GitHub at https://github.com/marcelduran/webpagetest-api.

Installing the WebPageTest Node.js Wrapper

Assuming Node.js is already installed, type the following at the command prompt:

 npm install webpagetest -g

The -g flag is required to make the webpagetest command-line tool globally available.

Once the WebPageTest Node.js Wrapper is installed, you have the command line available in your terminal and can get more information by typing:

 webpagetest --help

Choosing Your WebPageTest Server

The default WebPageTest API Wrapper server is the public instance (www.webpagetest.org), but you can override it in the command line by doing one of two things:

  • Setting the -s, --server server option—for example, webpagetest -s wpt-private-server.com

  • Setting the WEBPAGETEST_SERVER environment variable—for example, export WEBPAGETEST_SERVER=wpt-private-server.com

As a Node.js module, the default WebPageTest server is also the public instance and can be overridden by specifying the first parameter of the constructor:

var WebPagetest = require('webpagetest');

var publicWPT = new WebPagetest();

var privateWPT = new WebPagetest('wpt-private-server.com');

Even when a WebPageTest server is specified, you can still override it with any method by supplying the server option:

var wpt = new WebPagetest('wpt-private-server.com');

wpt.getLocations({server: 'another-wpt-server.com'}, function(err, data) {
  console.log(err || data);
});

Specifying the API Key

To specify the API key in the command line in order to run a test, set the -k, --key api_key as follows:

 webpagetest -k API_KEY_GOES_HERE test http://www.example.com

As a Node.js module, it can be set either as the second parameter in the constructor function or as an option in the runTest function:

var wpt = new WebPagetest('wpt-private-server.com', 'API_KEY_GOES_HERE');

// run test on wpt-private-server.com with a given API key
wpt.runTest('http://www.example.com', function(err, data) {
  console.log(err || data);
});

// run test on wpt-private-server.com with another given API key
wpt.runTest('http://www.example.com', {key: 'ANOTHER_API_KEY'}, function(
   err, data) {
  console.log(err || data);
});

Running the Tests and Reading the Results

Following the examples from Chapter 9, testing with the WebPageTest API Wrapper is cleaner and easier.

Running tests from the command line

To test the web performance of http://www.example.com on a WebPageTest public instance using an API key with default configuration:

 webpagetest test http://www.example.com -k API_KEY_GOES_HERE

Or with long parameter names:

 webpagetest test http://www.example.com --key API_KEY_GOES_HERE

Here’s the same test but with the following configuration:

  • Run from San Francisco location

  • Use latest Chrome

  • Use DSL connectivity profile

  • Run three times

  • First view only (for each run)

  • Capture video

  • Set “Using WebPageTest” as test label

  • Capture DevTools Timeline information

webpagetest test http://www.example.com -k API_KEY_GOES_HERE -l 
SanFrancisco:Chrome -y DSL -r 3 -f -v -L "Using WebPageTest" -M

Or with long parameter names:

webpagetest test http://www.example.com --key API_KEY_GOES_HERE --location 
SanFrancisco:Chrome --connectivity DSL --runs 3 --first --video --label 
"Using WebPageTest" --timeline

Batch jobs can be run in parallel, and the responses follow the same order as the jobs in a given input file. Assuming jobs.txt has the following content:

test http://www.example.com -k API_KEY_GOES_HERE
test http://www.example.com -k API_KEY_GOES_HERE -l 
SanFrancisco:Chrome -y DSL -r 3 -f -v -L "Using WebPageTest" -M

Then from the command line, type:

 webpagetest batch jobs.txt

The test command also supports a WebPageTest script file as input instead of a URL. Assuming sample.wptscript has the following content:

logData 0
navigate http://www.example.com/login
logData 1
setValue name=username johndoe
setValue name=password 12345
submitForm action=http://www.example.com/main
waitForComplete

Then from the command line, type:

 webpagetest test sample.wptscript

Reading results from the command line

To read results, assuming any of the above test commands returned 150109_DE_ZW7 as testId, type:

 webpagetest results 150109_DE_ZW7

Running tests and reading results from the command line

Since running a test and then reading its results is the most common WebPageTest workflow, the Node.js wrapper provides polling and pingback mechanisms. Here is an example that requests that a WebPageTest public instance test the web performance of http://www.example.com using the default test configuration, and then starts polling every five seconds (the default interval, which can be overridden by providing a number of seconds to the --poll parameter):

 webpagetest test http://www.example.com -k API_KEY_GOES_HERE --poll

Here’s the same example run against a private instance of WebPageTest using pingback, because a public instance wouldn’t be able to ping back to your local machine:

 webpagetest test http://www.example.com -s wpt-private-server.com --wait

For both of these tests, --timeout could be provided (in seconds) to either stop polling or abandon waiting for pingback.
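
For example, the following polls every 10 seconds and gives up after 120 seconds if results are not ready (both values are arbitrary):

 webpagetest test http://www.example.com -k API_KEY_GOES_HERE --poll 10 --timeout 120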

Running tests and reading results from a Node.js module

All methods are asynchronous; i.e., they require a callback function that is executed when the WebPageTest API response is received with either data or an error. Unlike with the command line, method names on the Node.js module are verbose (e.g., getTestResults versus results) for code readability.

The following example tests the web performance of http://www.example.com on a WebPageTest public instance using an API key with the default configuration, and then polls results every five seconds, getting the first-paint time for first view:

var WebPagetest = require('webpagetest');

var wpt = new WebPagetest('www.webpagetest.org', 'API_KEY_GOES_HERE');

wpt.runTest('http://www.example.com', function(err, res) {
  if (err || res.statusCode >= 400) {
    return console.log(err || res.statusText);
  }
  function results(err, res) {
    if (res.statusCode < 200) {
      console.log('Test', res.data.id, 'not ready yet. Trying again in 5s');
      setTimeout(wpt.getTestResults.bind(wpt, res.data.id, results), 5000);
    } else if (res.statusCode == 200) {
      console.log('First Paint:', res.data.median.firstView.firstPaint);
    }
  }
  console.log('Test', res.data.testId, 'requested. Start polling in 5s');
  setTimeout(wpt.getTestResults.bind(wpt, res.data.testId, results), 5000);
});

This could be simplified using the pollResults option:

var WebPagetest = require('webpagetest');

var wpt = new WebPagetest('www.webpagetest.org', 'API_KEY_GOES_HERE');

wpt.runTest('http://www.example.com', {pollResults: 5}, function(err, res) {
  console.log(err || 'First Paint: ' + res.data.median.firstView.firstPaint);
});

Similarly, pingback could also be used in the previous example:

var WebPagetest = require('webpagetest'),
    os = require('os'),
    url = require('url'),
    http = require('http');

var wpt = new WebPagetest('wpt-private-server.com');

// Local server to listen for test complete.
var localServer = http.createServer(function(req, res) {
  var uri = url.parse(req.url, true);

  res.end();

  // Get test results.
  if (uri.pathname === '/testdone' && uri.query.id) {
    localServer.close(function() {
      wpt.getTestResults(uri.query.id, function(err, res) {
        console.log(err || 'First Paint: ' + res.data.median.firstView.firstPaint);
      });
    });
  }
});

// Test http://www.example.com.
wpt.runTest('http://www.example.com', {
  pingback: url.format({
    protocol: 'http',
    hostname: os.hostname(),
    port: 8080,
    pathname: '/testdone'
  })
}, function(err, data) {
  // Listen for test complete (pingback).
  localServer.listen(8080);
});

Or to make it even simpler, use the waitResults option:

var WebPagetest = require('webpagetest');

var wpt = new WebPagetest('wpt-private-server.com');

wpt.runTest('http://www.example.com', {waitResults: 'auto'}, function(err, res) {
  console.log(err || 'First Paint: ' + res.data.median.firstView.firstPaint);
});

By setting waitResults to auto, the WebPageTest Node.js Wrapper uses the system hostname as the hostname and 8000 as the port, incrementing the port by 1 if it is already in use.
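
If auto does not fit your setup, a reachable hostname and port can be given explicitly as the waitResults value. Here is a sketch assuming ci-worker.corp.example.com:8080 is reachable from the WebPageTest server:

var WebPagetest = require('webpagetest');

var wpt = new WebPagetest('wpt-private-server.com');

// Listen for the pingback on an explicitly named host and port.
wpt.runTest('http://www.example.com', {
  waitResults: 'ci-worker.corp.example.com:8080'
}, function(err, res) {
  console.log(err || 'First Paint: ' + res.data.median.firstView.firstPaint);
});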

Tip

In the previous examples, the pingback URL must be reachable from the private WebPageTest server, aliased as wpt-private-server.com.

The timeout option is also available for both the pollResults and waitResults options.
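
For instance, here is a sketch that polls every 5 seconds but gives up with an error after 60 seconds without results:

var WebPagetest = require('webpagetest');

var wpt = new WebPagetest('www.webpagetest.org', 'API_KEY_GOES_HERE');

// Poll every 5 seconds; stop after 60 seconds if results are not ready.
wpt.runTest('http://www.example.com', {pollResults: 5, timeout: 60},
    function(err, res) {
  console.log(err || 'First Paint: ' + res.data.median.firstView.firstPaint);
});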

To avoid the error-prone hassle of building tab-delimited WebPageTest scripts by hand, the WebPageTest API Wrapper provides a script builder function named scriptToString:

var script = wpt.scriptToString([
  {logData: 0},
  {navigate: 'http://www.example.com/login'},
  {logData: 1},
  {setValue: ['name=username', 'johndoe']},
  {setValue: ['name=password', '12345']},
  {submitForm: 'action=http://www.example.com/main'},
  'waitForComplete'
]);
wpt.runTest(script, function (err, data) {
  console.log(err || data);
});

RESTful Proxy

The WebPageTest API Wrapper comes with a handy RESTful proxy (listener) that exposes WebPageTest API methods consistently. It means that all the benefits of methods, options, and JSON output from the WebPageTest API Wrapper can be easily reachable through RESTful endpoints.

API proxy endpoints follow the format:

/command[/main_parameter][?parameter1=value1&parameter2=value2&…]

where:

  • command: One of the available commands (test, results, etc.) from the command line

  • main_parameter: Usually a test_id, url, or wpt_script

  • parameter=value: List of extra optional parameters—for example, key, first, etc.

Running a proxy from the command line

Assuming a WebPageTest private instance is located at wpt-private-server.com and a local machine named local-machine has bidirectional direct access:

 webpagetest listen --server wpt-private-server.com

This will turn the local machine into a WebPageTest API Wrapper RESTful proxy for wpt-private-server.com. From any other machine in the same network, WebPageTest can be accessed via RESTful proxy, such as:

http://local-machine/help

Displays the WebPageTest API Wrapper help (use http://local-machine/help/<command> to get help for a given command).

http://local-machine/locations

Fetches WebPageTest locations for wpt-private-server.com.

http://local-machine/test/http%3A%2F%2Fwww.example.com?first=true

Runs a test for http://www.example.com using default test configuration on wpt-private-server.com.

http://local-machine/results/150109_DE_ZW7

Fetches test results for an existing test with ID 150109_DE_ZW7 on wpt-private-server.com.

If --server or -s is not provided, the WebPageTest API Wrapper first checks the WEBPAGETEST_SERVER environment variable and then falls back to the public instance, www.webpagetest.org.
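
For example, the following has the same effect as passing --server explicitly:

 export WEBPAGETEST_SERVER=wpt-private-server.com
 webpagetest listen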

Running a proxy from a Node.js module

The method for running a proxy from a Node.js module is called listen and has one optional port parameter (default 7791).

var WebPageTest = require('webpagetest');
var wpt = new WebPageTest();
wpt.listen(3000);

Asserting Metrics from Test Results

The WebPageTest API Wrapper introduces the concept of test specs. It allows any test result, whether fetched directly with the results command or obtained from a synchronous test with the --poll or --wait options, to be asserted by comparing the actual values against expected values defined in a spec JSON string or file.

JSON Test Specs

The assertion test specs file follows the structure of the JSON output of the WebPageTest results command. Starting from data as the root node, it traverses the entire result tree looking for matching leaves from the test specs definition file.

As an example, assume that a JSON file named testspecs.json has the following test specs definition:

{
  "median": {
    "firstView": {
      "requests": 20,
      "render": 400,
      "loadTime": 3000,
      "score_gzip": {
        "min": 90
      }
    }
  }
}

Now run the following command to test the first view of http://staging.example.com, polling for results and using the previous test specs:

 webpagetest test http://staging.example.com --first --poll
  --specs testspecs.json

The test returns the following test results:

{
  "data": {
  ...
    "median": {
      "firstView": {
        ...
        "requests": 15
        "render": 500,
        "loadTime": 2500,
        "score_gzip": 70
        ...
      }
    },
  ...
  }
}

These results are then compared against testspecs.json, and the output is:

WebPageTest
    ✓ median.firstView.requests: 15 should be less than 20
    1) median.firstView.render: 500 should be less than 400
    ✓ median.firstView.loadTime: 2500 should be less than 3000
    2) median.firstView.score_gzip: 70 should be greater than 90

2 passing (3 ms)
2 failing

The exit status is:

echo $?
2
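
Because a nonzero exit status indicates failing specs, a CI build shell step can use it directly to fail the build. A minimal sketch:

 webpagetest test http://staging.example.com --first --poll --specs testspecs.json
 # A nonzero exit status (failing specs) aborts the build step.
 [ $? -eq 0 ] || exit 1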

Defining Assertion Comparison

By default, all comparison operations are < (less than), except when an object with min and/or max values is provided. In that case, min implies a > (greater than) comparison and max implies a < (less than) comparison; when both min and max are provided, a range comparison is used.

Examples of overriding assertion comparison

Less-than comparison:

{ "median": { "firstView": {
  "render": 400
}}}

or

{ "median": { "firstView": {
  "render": { "max": 400 }
}}}

Greater-than comparison:

{ "median": { "firstView": {
  "score_gzip": { "min": 75 }
}}}

Range comparison:

{ "median": { "firstView": {
  "requests": { "min": 10, "max": 30 }
}}}

Setting Default Operations and Labels

You can optionally define default operations and label templates inside the defaults property of the specs JSON file:

{
  "defaults": {
    "suiteName": "Performance Test Suite for example.com",
    "text": ": {actual} should be {operation} {expected} for {metric}",
    "operation": ">"
  },
  "median": { "firstView": {
    "score_gzip": 80,
    "score_keep-alive": 80
  }}
}

The test suite name and specs text label templates will be used in lieu of the predefined default ones. Using the previous test spec file should output:

Performance Test Suite for example.com
    1) 70 should be greater than 80 for median.firstView.score_gzip
    ✓ 100 should be greater than 80 for median.firstView.score_keep-alive

1 passing (3 ms)
1 failing

If the defaults property is omitted, the following properties are used:

  "defaults": {
    "suiteName": "WebPageTest",
    "text": "{metric}: {actual} should be {operation} {expected}",
    "operation": "<"
  }

Available Output Text Template Tags

{metric}

The metric name—for example, median.firstView.loadTime

{actual}

The value returned from the actual test results—for example, 300

{operation}

The long operation name—for example, less than

{expected}

The defined expected value—for example, 200

Available Assertion Operations

<

Less than

>

Greater than

<>

Greater than and less than (range)

=

Equal to

Overriding Labels

Overriding individual spec labels is also possible by providing text in the spec object:

{ "median": { "firstView": {
  "loadTime": {
    "text": "page load time took {actual}ms and should be no more
      than {expected}ms",
    "max": 3000
  }
}}}

Which outputs:

WebPageTest
    ✓ page load time took 2500ms and should be no more than 3000ms

1 passing (2 ms)

Specifying Test Reporter

The WebPageTest API Wrapper test specs use Mocha to build and run a test suite. Once a test suite is done, a reporter, selected with the --reporter option, formats and builds the output results. The following reporters are available:

  • dot (default)

  • spec

  • tap

  • xunit

  • list

  • progress

  • min

  • nyan

  • landing

  • json

  • doc

  • markdown

  • teamcity
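
For example, to run the earlier spec test again but emit TAP output, pass the reporter name on the command line (reusing testspecs.json from before):

 webpagetest test http://staging.example.com --first --poll
  --specs testspecs.json --reporter tap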

Test Specs Examples

How you assert the results of a WebPageTest test varies, because it depends on the key performance metrics you are measuring for your pages. The WebPageTest API Wrapper test specs provide several ways to assert any metric provided by the WebPageTest API. Following are some examples that you can adapt to your particular case.

Asserting by MIME type

By either running tests synchronously or just fetching results, it is possible to test by MIME type:

{
  "median": {
    "firstView": {
      "breakdown": {
        "js": {
          "requests": 6,
          "bytes": 200000
        },
        "css": {
          "requests": 1,
          "bytes": 50000
        },
        "image": {
          "requests": 10,
          "bytes": 300000
        }
      }
    }
  }
}

The preceding spec only allows up to 6 JavaScript requests summing up to 200 KB, 1 CSS request up to 50 KB, and no more than 10 images up to 300 KB total.

Asserting by processing breakdown

When running tests synchronously in Chrome with the --timeline option, it is possible to test by processing breakdown:

{
  "run": {
    "firstView": {
      "processing": {
        "RecalculateStyles": 1300,
        "Layout": 2000,
        "Paint": 800
      }
    }
  }
}

The preceding spec only allows up to 1,300 ms of recalculate styles, 2,000 ms of layout, and 800 ms of paint time processing. Once typical values for these metrics are known from several previous test runs, such a spec guards against rendering regressions.
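
For instance, such a spec (saved here as a hypothetical processing-specs.json file) could be asserted with a synchronous Chrome test that captures the DevTools Timeline; the location name is illustrative:

 webpagetest test http://staging.example.com --first --poll --timeline
  --location Default:Chrome --specs processing-specs.json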

Jenkins Integration

You can integrate the WebPageTest API Wrapper with Jenkins and other CI tools seamlessly. To do so, run commands to test synchronously with either --poll or --wait (if the Jenkins server is reachable from a private instance of the WebPageTest server), and specify a --specs file or JSON string with either tap or xunit as --reporter.

Configuring Jenkins

Jenkins expects test suite output in a known format so that it can parse individual results and alert when tests do not pass. Here are a couple of the most common reporters supported by Jenkins:

Using TAP as a test reporter

Jenkins supports the Test Anything Protocol (TAP) via a plug-in that can be installed from the Jenkins Plugin Manager. Assuming example.com has the following configuration:

  • Staging server: staging.corp.example.com

  • Jenkins server: jenkins.corp.example.com

  • WebPageTest private instance: wpt.corp.example.com

  • WebPageTest location named Default with Chrome browser

  • Jenkins has a /specs directory with test specs JSON files, with:

    /specs/homepage.json:

{
  "median": {
    "firstView": {
      "requests": 20,
      "render": 400,
      "loadTime": 3000,
      "score_gzip": {
        "min": 90
      }
    }
  }
}

The build shell command to be executed is:

webpagetest test http://staging.corp.example.com 
--server http://wpt.corp.example.com --first --location Default:Chrome 
--wait jenkins.corp.example.com:8000 --specs /specs/homepage.json 
--reporter tap > homepage.tap

Jenkins has a “Post-build Actions” section where you should enter homepage.tap as “Test results.” You can see a screenshot at http://bit.ly/wpt-jenkins.

Using JUnit as a test reporter

Using the same example but without any plug-ins, Jenkins can parse JUnit reports by default with the following build shell command:

webpagetest test http://staging.corp.example.com 
--server http://wpt.corp.example.com --first --location Default:Chrome 
--wait jenkins.corp.example.com:8000 --specs /specs/homepage.json 
--reporter xunit > homepage.xml

Jenkins postbuild actions should publish a JUnit test result report for homepage.xml.

Travis-CI Integration

Similar to the Jenkins integration, Travis-CI also requires that tests be run synchronously, in this case via the --poll option, as it’s very unlikely that Travis-CI workers are reachable from private or public instances of WebPageTest servers. --specs is required to assert the results, but --reporter is less important, because Travis-CI relies on the exit status rather than on a parseable output format the way Jenkins does.

Configuring Travis-CI

The following is an example of a WebPageTest performance test for a contrived Node project in a GitHub public repo. Add a test script to the package.json file:

{
  "name": "example",
  "version": "0.0.1",
  "dependencies": {
    "webpagetest": ""
  },
  "scripts": {
    "test": "./node_modules/webpagetest/bin/webpagetest
             test http://staging.example.com
             --server http://webpagetest.example.com
             --key $WPT_API_KEY
             --first
             --location MYVM:Chrome
             --poll
             --timeout 60
             --specs specs.json
             --reporter spec"
  }
}

Note that line breaks were added to the test script for clarity; it should be written on a single line.

This test script will:

  1. Schedule a test on a private instance of WebPageTest hosted on http://webpagetest.example.com, which must be publicly reachable from Travis-CI workers

  2. Use a WebPageTest API key from WPT_API_KEY (environment variable, see “Encrypting the WebPageTest API key”)

  3. Test http://staging.example.com, which must be publicly reachable from WebPageTest agents

  4. Run a test for first view only

  5. Run from location MYVM on Chrome browser

  6. Poll results every five seconds (default)

  7. Time out in 60 seconds if no results are available

  8. Test the results against the specs.json spec file

  9. Output using the spec reporter
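
For a Node.js project, Travis-CI runs npm test by default, so a minimal .travis.yml only needs to declare the language and runtime; here is a sketch (the Node.js version shown is an assumption):

language: node_js
node_js:
  - "0.10"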

Encrypting the WebPageTest API key

If you are scheduling your tests to run from public instances of Travis-CI workers, such as from a public GitHub repository, WebPageTest API keys (--key or -k) should be used to prevent abuse, but do not put unencrypted API keys in public files. Fortunately, Travis-CI provides an easy way to keep keys secret via secure environment variables, which avoids exposing $WPT_API_KEY in the public .travis.yml file.

Install the Travis CLI and go to the repo directory:

gem install travis
...
cd repo_dir

Next, encrypt the WebPageTest API key as a secure environment variable:

travis encrypt WPT_API_KEY=super_secret_api_key_here --add

Note that it must run from the repo directory or use -r or --repo to specify the repo name in the format user/repo—for example, marcelduran/webpagetest-api.

By default, the --add flag will append the encrypted string to the .travis.yml file as:

env:
  global:
    - secure: <encrypted WPT_API_KEY=super_secret_api_key_here string>

In this chapter, we covered how the WebPageTest API can be integrated into your web development pipeline via CI. Such integration helps you protect the quality of your web pages by preventing key performance metrics from regressing during push cycles. It can also help you track performance metric values over time so you can measure the impact of adding new features to a page.

Once WebPageTest is integrated into your CI tool, after several push cycles you start getting a better idea of the state of performance of your pages. Data collected from CI can be used to plot historical information about your pages’ performance, and it can catch unexpected regressions introduced by changes that were not expected to affect performance. Some metrics, such as the number of requests, are easy to track and assert; others, especially time-based metrics, may require some tuning to find an optimum range.
