CHAPTER 4

Case Study and Gap Analysis

This chapter presents a case study that provides the basis for an evaluation of TaaS against the issues identified in the HPST survey. The evaluation framework is described and applied to the tools examined in the case study. Of the twelve tools introduced in the previous chapter, three were chosen as candidates for additional analysis: Sauce Labs, SOASTA CloudTest Lite, and BlazeMeter. The remainder of this chapter details the case study and provides an analysis of the gap between current TaaS solutions and industry needs.

4.1    TaaS TOOLS

From the summary tables in Chapter 3, it is clear that detailed information is unavailable for some of the TaaS products. Of the companies that offered significant information about their product, only three promoted free, albeit limited, versions of their tools to potential consumers as a means to learn the tool and determine whether it suits their needs. Surprisingly, very few companies offered trial periods with their products. More commonly, interested parties can request a demonstration of the tool as a means of better understanding its features and uses.

For the purpose of this case study, all three of the freely available tools were examined. Each is described in greater detail below, before being explored as part of the case study in Section 4.2. The descriptions in this section are based on the advertised features from the provider’s educational material and, to remain objective, have not yet been validated for functionality, quality, or usability. This section simply aims to offer a better understanding of the capabilities of each tool, building on the brief overview in Chapter 3.

4.1.1   SAUCE LABS (SAUCE ONDEMAND)

Sauce Labs [36] is a TaaS platform specializing in unit testing. Sauce Labs, sometimes referred to as Sauce OnDemand, became available in 2008. They offer the ability to test web applications using Selenium and JavaScript, and native and hybrid mobile applications using Appium. Additionally, Sauce Labs allows consumers to perform manual tests on over two hundred desktop and mobile platforms, reducing the need for in-house test labs for the purposes of manual testing and user acceptance testing. The available testing environments are listed in Table 4.1, where the numbers in the cells indicate the supported versions of the browser. The mobile environments (iOS and Android) have the same support for both phone and tablet versions.

Table 4.1: Sauce Labs supported environments


Sauce Labs has a variety of packages to meet the needs of different teams. The packages vary in the number of testing minutes, the number of parallelized tests that can be run concurrently, and the number of distinct user accounts. The paid packages are shown in Figure 4.1 below. Subscriptions can be changed at any time, with new plans taking effect at the beginning of the next billing cycle. Some plans offer rollover minutes. Overages are billed at an additional $0.05 per minute for Windows, Linux, and Android or $0.13 per minute for Mac and iOS.

The free version, which was used in the case study, offers 30 manual minutes, 100 Windows/Linux/Android minutes, 40 Mac/iOS minutes, two parallelizations, and one user account, with all minute allotments renewing monthly. Each minute is one minute of real-time testing—if a test takes three minutes to run, three minutes are deducted from the user’s account. Sauce Labs also provides bonus minutes: 50 additional minutes after the first test is run and 1,000 additional minutes once 50 automated tests have been executed. Another free version, Open Sauce, is available to open source projects; it allows unlimited public tests but must be connected to a project stored in GitHub.

Sauce Labs uses Sauce Connect, a custom HTTP/S-like protocol that uses SSL for securely testing applications behind a firewall. They promise pristine virtual machines that are dismantled after every test to keep all tests and test data secure. During test runtime, Sauce Labs records screenshots, videos, and HTML logs to help with debugging and additional verifications. Additionally, sessions and videos can be shared amongst team members for better collaboration in the Small Team and Enterprise packages.

Users do not have to be skilled in writing automated tests to make use of Sauce Labs. The open source tool Selenium Builder [37] can directly connect to a user’s Sauce Labs account, allowing the recording of tests in Firefox using the Selenium Builder plugin. Tests are recorded in Selenium 1 or 2 individually or as test suites and can be run directly in Sauce OnDemand from the plugin. Alternatively, tests can also be exported as a Java, JUnit, or TestNG file for editing or use with other tools.


Figure 4.1: Sauce Labs pricing.

Continuous integration is offered through external tools, including Jenkins, Bamboo, and Travis, with Hudson coming soon. Apache Maven is fully supported, allowing tests to be run on Sauce Labs’ platforms during the standard build process. Additionally, open source projects can be directly connected to an Open Sauce account.

4.1.2   SOASTA CloudTest LITE

SOASTA [38] was founded in 2006 to provide website and web application testing services. They offer a variety of tools including CloudTest and its free but limited counterpart CloudTest Lite. CloudTest is a functional, load, and performance testing tool built on Amazon Elastic Compute Cloud (EC2) that tests websites and web and mobile applications with global traffic. CloudTest can execute tests with up to millions of users and thousands of servers; however, CloudTest Lite is limited to a single server and up to 100 virtual users. Another tool, TouchTest, is packaged with CloudTest Lite and allows precise capturing of multi-touch gestures for playback on mobile devices, realistically testing iOS and Android applications.

According to SOASTA’s educational material, CloudTest Lite is intended for implementing tests early in the development cycle, working in a small-scale environment, verifying web application functionality, and allowing prospective customers to freely evaluate CloudTest’s basic capabilities. While larger programs should use the full version of CloudTest, pricing information is not available on the website.

Test composition, execution, and monitoring are all handled in one platform. Existing Apache JMeter tests can be imported directly into the SOASTA platform; alternatively, test scripts can be recorded through browsers, including Firefox and Chrome, using Windows, Mac, and Linux systems. Recorded test scripts are converted into test clips, where they can be edited to include data verifications. Clips can be combined and run concurrently or consecutively as test compositions. At runtime, results and data become immediately available.

SOASTA includes real-time analytics with all tools. Their custom OLAP engine provides the detailed data necessary to allow testers to explore live results during test execution, pinpointing bottlenecks and areas of stress within the web application. The system includes a built-in, end-to-end view of web performance, consolidated in charts, lists, and graphs. Performance data includes database servers, load balancers, code, bandwidth, and end user response time.

In conjunction with CloudBees, a Java-based PaaS, SOASTA incorporated Jenkins into their offerings, allowing users to build, test, and deploy applications. Hudson can also be leveraged for continuous integration purposes.

4.1.3   BlazeMeter

BlazeMeter [39] is a load, performance, and stress testing tool also built on Amazon EC2 that allows users to simulate scenarios on websites, web and mobile applications, and web services. The tool is fully compatible with Apache JMeter, “the gold standard in open source performance testing,” allowing users to import entire suites of existing tests. BlazeMeter is highly scalable, allowing for over 300,000 concurrent users with up to 100 dedicated servers with geographically distributed loads.

BlazeMeter was founded by Alon Girmonsky in response to developers who demanded a cheaper and simpler alternative to load testing tools like HP LoadRunner and Gomez, which they claimed needed professional service engagements to properly deploy and maintain [40]. Girmonsky raised $1.2 million in venture funding and created BlazeMeter, which would accept the developers’ preexisting JMeter tests while offering more powerful and sophisticated tests for advanced web applications, competing against more comprehensive solutions, including SOASTA. Girmonsky claims that BlazeMeter is the only tool capable of handling “complex testing simulations, unlimited testing capacity, interactive real-time reporting, and sophisticated result analysis and recommendations” [40].

In signing up for BlazeMeter, users automatically receive the free tier of service. The free account offers up to ten tests per month. Each test can, at maximum, extend for an hour’s time and have up to 50 concurrent users. The paid packages vary in maximum number of users, load servers, test hours per month, and maximum test duration. The monthly subscriptions also offer rollover minutes and may include unlimited sandbox testing, master/slave tests, multiple user accounts, and access behind a corporate firewall. Customers only interested in running a handful of tests can instead pay per test. Figure 4.2 shows BlazeMeter’s pricing plans.


Figure 4.2: BlazeMeter pricing.

Users can upload existing JMeter scripts or record them using BlazeMeter’s own Chrome plugin. Tests recorded in the plugin may be executed directly or exported, edited, and uploaded into BlazeMeter at a later time. If desired, URLs can be tested directly with GET and POST requests. Users of Google Analytics, a tool that generates traffic statistics, can connect their accounts, allowing BlazeMeter to extract the website’s data directly and handle test case generation based on historical data. Finally, companies using Drupal, an open-source content management framework, can download BlazeMeter’s Drupal module, specify the load, and run a performance test without scripting.

BlazeMeter provides a real-time interactive test monitoring dashboard with detailed charts and graphs. The Application Performance Monitoring (APM) system aims to provide detailed application performance to help pinpoint and analyze bottlenecks. Additionally, the APM can integrate with other advanced monitoring tools, including New Relic, to provide end-to-end visibility.

BlazeMeter also works with several continuous integration tools, including Jenkins, Bamboo, and TeamCity.

4.2    CASE STUDY

This section presents a case study that analyzes Sauce Labs, SOASTA CloudTest Lite, and BlazeMeter against an evaluation framework developed using the top challenges identified over the course of the HPST project. A limited selection of features from these tools is applied to a Java web application built on Google App Engine. The following sections describe the case study in more detail, including objectives, the evaluation framework, the system under test, the exploration of the tools, and an overview of results.

4.2.1   OBJECTIVES

The purpose of this case study is to evaluate TaaS tools against the needs of the industry as identified in the HPST survey. As a reminder, the top five challenges are listed below, but additional information can be found in Chapter 2. These points are used to generate the hypotheses presented in the next section:

1.  Tester education and training

2.  Need for tools or need for better tools

3.  Lacking, insufficient, or non-existent testing (particularly in the areas of automated, regression, and unit testing)

4.  Generating realistic schedules that partition out sufficient time for testing

5.  Improving communication between the test team and other groups

4.2.2   HYPOTHESIS

Using the aforementioned list of industry needs, five hypotheses were posited. To assist in determining whether the TaaS tools address the challenges, additional criteria were written based on standard tool evaluation questions and related problems discussed in the survey results. The hypotheses are prefaced by an “H” and numbered 1-5 below (e.g., “H.1”), while their related criteria are each uniquely identified by the hypothesis number followed by the criterion number (e.g., “1.1” for Hypothesis 1, Criterion 1). These codes are used in the evaluation in Section 4.2.5.

H.1: The TaaS tool reduces the need for tester education and training.

1.1 The tool is packaged with extensive documentation and/or training material.

1.2 The provider is available for support.

1.3 The tool can generate test cases for the system under test.

1.4 The tool has sources that aid with interface use.

1.5 The tool eases the transition to new technologies (i.e., new operating environments).

H.2: The TaaS tool satisfies the need for tools or need for better tools.

2.1 The tool is platform independent.

2.2 The tool is operating system independent.

2.3 The tool has customizable reports.

2.4 The tool notifies users of failures.

2.5 The tool has logging and/or debugging capabilities.

2.6 The tool allows data input from external sources.

2.7 The tool is compatible with continuous integration tools.

2.8 The tool is compatible with version control systems.

2.9 The tool is compatible with build drivers.

2.10 The tool is compatible with bug tracking systems.

2.11 The tool has a framework that supports extensibility.

H.3: The TaaS tool corrects the issue of lacking, insufficient, or non-existent testing.

3.1 The tool supports automated testing.

3.2 The tool supports multiple types of testing.

3.3 The tool offers unit testing.

3.4 The tool offers regression testing.

3.5 The tool offers performance testing.

3.6 The tool offers security testing.

3.7 The tool reduces the cost of in-house testing.

H.4: The TaaS tool assists in generating realistic schedules with adequate time for testing.

4.1 The tool efficiently and effectively runs test cases using the cloud’s resources.

4.2 The tool supports agile software testing.

4.3 The tool can run tests throughout the development cycle.

4.4 The tool supports continuous integration.

4.5 The tests provide feedback.

4.6 Test environments can be set up and prepared for test execution in an expedient manner.

H.5: The TaaS tool helps with communication both within the test team and between the test team and other groups.

5.1 The tool has customizable reports that can be consumed by the management team.

5.2 The tool allows developers to identify the source of issues or defects.

5.3 The tool provides a way of sharing results and data between members of the test team.

Each system is walked through and discussed in the next few sections; afterward, the tools are all evaluated against the described criteria.

4.2.3   SYSTEM UNDER TEST

The system under test was developed using Google App Engine (GAE) [41]. GAE is a PaaS that permits the development and hosting of web applications written in languages such as Java, Python, and PHP. Websites are hosted on a Google-managed cloud along with their associated storage, whether MySQL databases, NoSQL datastores, or object storage. Free but restricted accounts are offered to anyone with a Google account, and fees can be paid for additional resources, including storage and bandwidth.

The GAE Admin Console allows administrators to control various aspects of the application: modifying basic configuration, adjusting performance options, viewing configured services, viewing and administering the datastore, splitting traffic between versions of the application, viewing instances, and monitoring resource utilization and statistics [42]. GAE’s resource utilization charts and graphs were beneficial for comparison with performance testing tools.

Google allows a variety of plugins, permitting developers to work with familiar tools, including Eclipse, Jenkins, Maven, Git, IntelliJ, and more. The GAE Software Development Kit (SDK) is required for local development and is freely available on the website. GAE is scalable up to seven billion requests per day.

For the purpose of this case study, a simple “To Do” application was created in Eclipse Kepler using Java 1.7, based on the tutorial written by Lars Vogel [43]. The front end was a simple JavaServer Page (JSP), written in Java and HTML, with a Cascading Style Sheet (CSS) used to modify the look of the page. The application consists of four servlets to allow for adding and deleting of “To Do” items and for logging in and out. GAE has a built-in method for logging in and out using Google accounts, but an oversimplified custom method was used to have better control when testing. The Java Persistence API (JPA) was used to help maintain data in the session using a schemaless object datastore, which is queried using the Java Persistence Query Language (JPQL). JPQL queries resemble Structured Query Language (SQL) queries in syntax. Code examples for the system under test can be found in Appendix B.
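
As a rough illustration of the persistence layer (not the Appendix B code itself), a JPQL lookup of one user’s items through JPA might resemble the sketch below; the entity and field names are hypothetical.

    import java.util.List;

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Query;

    // Hypothetical JPA entity representing a single To Do item.
    @Entity
    class ToDoItem {
        @Id
        @GeneratedValue
        private Long id;
        private String userId;
        private String summary;
        // Getters and setters omitted for brevity.
    }

    public class ToDoQueries {

        private final EntityManager em;

        public ToDoQueries(EntityManager em) {
            this.em = em;
        }

        // JPQL names entity classes and their fields rather than tables and
        // columns, but otherwise reads much like SQL.
        @SuppressWarnings("unchecked")
        public List<ToDoItem> itemsFor(String userId) {
            Query query = em.createQuery(
                    "SELECT item FROM ToDoItem item WHERE item.userId = :userId");
            query.setParameter("userId", userId);
            return query.getResultList();
        }
    }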


Figure 4.3: To Do application.

4.2.4   ANALYSIS OF TOOLS

The following sections describe experiences with using each tool, detailing the learning process, test creation and execution, and results. As previously discussed, these tools have various methods of use and access; however, not every aspect of the tools is explored for the purpose of the case study. Due to the limited scope of the case study and the small scale of the system under test, an additional subsection references enterprise-level success stories with each tool.

Sauce Labs

Getting started: From the Sauce Labs homepage, new users can quickly sign up for an account. Once logged in, a dashboard displays a table that will contain the executing/executed tests and information about the account itself, including account type, number of parallelized tests permitted, and number of minutes remaining. In the left module, the access key is also provided, which is used for connecting the externally generated tests to the user’s account. A screenshot of the dashboard is shown in Figure 4.4. A closer look at the dashboard is provided later in this chapter.

Sauce Labs offers two general methods of creating tests. The first is to write the tests from scratch and run them through a build driver, such as Maven. The other is to record tests using Selenium Builder [44]. With no prior knowledge of writing tests, Selenium Builder is the easiest option. Simply install the Firefox plugin, click on the logo in the bottom right corner of the browser, and start recording. Instructions are available on both the Sauce Labs and Selenium Builder’s websites. Writing tests from scratch requires more knowledge and can become complicated depending on experience.


Figure 4.4: Sauce Labs dashboard.

Fortunately, Sauce Labs provides extensive documentation to help. Tutorials exist for Selenium 1 and 2 using Java, Python, Ruby, and more languages; JS Unit; and various continuous integration tools. The tutorials and the examples they provide are fairly simplistic in their testing capabilities, though, and additional research on unit testing and the scripting language of choice is likely required for those with no experience.

Writing tests from scratch: Having experience with JUnit (albeit no experience with Selenium or Maven), Java was selected as the language for writing unit tests. Using the tutorials provided by Sauce Labs and some additional reading into Maven and Selenium 2 (WebDriver), two tests were written. The first performs a test on the basic functionality of the system under test in a single environment (Windows 8 with Firefox version 26), while the second performs a simple test that verifies the title of the system under test under six separate environments to explore parallelization:

•  Windows XP with Firefox version 26

•  Windows 8 with Internet Explorer version 10

•  Mac OS X 10.6 with Safari version 5

•  Linux with Chrome version 30

•  iPhone iOS 10.9 with Safari version 7 using portrait orientation

•  Android tablet with Android browser version 4.0 using portrait orientation

Sauce Labs offers a simple example of a Selenium 2 test case written in Java, which was annotated with comments for this case study to help with understanding the functions. Due to their length, the code for the tests written for this case study is available in Appendix C, along with the Maven information for running them.
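
A rough sketch of what such a test might look like is given below. It assumes JUnit 4 and Selenium 2 (RemoteWebDriver) pointed at Sauce OnDemand; the account credentials, application URL, and expected title are placeholders rather than Sauce Labs’ published example.

    import java.net.URL;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    import static org.junit.Assert.assertEquals;

    public class SimpleSauceTest {

        private WebDriver driver;

        @Before
        public void setUp() throws Exception {
            // Describe the environment Sauce OnDemand should provision for this test.
            DesiredCapabilities caps = DesiredCapabilities.firefox();
            caps.setCapability("platform", "Windows 8");
            caps.setCapability("version", "26");
            // The "name" capability becomes the session name on the Sauce Labs dashboard.
            caps.setCapability("name", "To Do title check");

            // Placeholder credentials; a real test uses the account's user name and access key.
            driver = new RemoteWebDriver(
                    new URL("http://YOUR_USERNAME:YOUR_ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub"),
                    caps);
        }

        @Test
        public void pageTitleIsCorrect() {
            // Load the system under test in the remote browser and verify the page title.
            driver.get("http://your-app-id.appspot.com/");
            assertEquals("To Do", driver.getTitle());
        }

        @After
        public void tearDown() {
            // Quit the remote browser so the session ends and minutes stop being consumed.
            driver.quit();
        }
    }

A test like this can be run through a build driver such as Maven, after which it appears in the Sauce Labs dashboard alongside recorded tests.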

Writing two tests from scratch while learning Selenium, Maven, and Sauce Labs and working part-time took approximately one week to complete with some trial and error. While not an overly long period of time for an experienced developer, the time required to complete a single test, especially for a more complex system or a tester with less development experience, could grow significantly. For these scenarios, writing a test script using Selenium Builder offers significant advantages.

Working with Selenium Builder: Selenium Builder is an open source tool for writing Selenium 1 or 2 test scripts by recording actions on a Firefox browser. The Firefox plugin can be installed from the Sauce Labs website. Once installed, the tool is available from the green LEGO®-like brick at the bottom right of the browser, as seen in Figure 4.5. Clicking on the brick brings up the Selenium Builder tool in a separate window, shown to the right in the figure.


Figure 4.5: Selenium Builder tool.

To start recording, simply select the button showing the version of Selenium desired and then perform actions on the browser showing the system under test. At any point, verifications can be recorded by selecting the “Record a verification” button and then clicking on the field that needs to be verified. Various steps can be recorded manually in the tool, including navigation, input, assertions, verifications, waits, stores, and even miscellaneous items including printing, screenshots, and working with cookies. Figure 4.6 shows a recording in progress.


Figure 4.6: Selenium Builder recording in progress.

Once the test is completed, simply press the “Stop recording” button. From here, tests can be saved individually or added to a suite, run locally or on Sauce OnDemand, or debugged. Figure 4.7 shows the prompt for running parallel tests in Sauce Labs from Selenium Builder. The following section discusses the Sauce Labs dashboard. The information in that section applies to both the manually written and recorded tests.


Figure 4.7: Running tests from Selenium Builder.

Viewing the results: The Sauce Labs dashboard shows all tests that have been, or are in the process of being, executed. Figure 4.8 shows a closer look at the lists of tests displayed in Figure 4.4. For each test, the session, the environment information, the results, and timing information are displayed. In the example, the session name and results were set in the code (shown and commented in Appendix C), where the pass or fail is determined by the assertions—if all assertions pass, the results declare the test to be passed.


Figure 4.8: Sauce Labs test table.
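
By default, the results column only records that a test completed; reporting a pass or fail requires a small amount of extra code in the test itself. The sketch below shows one way this might be done, assuming JUnit 4 and Selenium’s JavascriptExecutor interface. The class name, credentials, and page details are hypothetical placeholders, and the “sauce:job-result” command is one reporting mechanism Sauce Labs supports (their REST API is another).

    import java.net.URL;

    import org.junit.Before;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class SauceReportingTest {

        private RemoteWebDriver driver;

        @Before
        public void setUp() throws Exception {
            DesiredCapabilities caps = DesiredCapabilities.firefox();
            // The "name" capability becomes the session name in the dashboard.
            caps.setCapability("name", "To Do functional test");
            // Placeholder credentials, as in the earlier sketch.
            driver = new RemoteWebDriver(
                    new URL("http://YOUR_USERNAME:YOUR_ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub"),
                    caps);
        }

        // Forward the JUnit outcome to Sauce Labs so the results column shows a
        // pass or fail rather than only a completed execution.
        @Rule
        public TestWatcher resultReporter = new TestWatcher() {
            @Override
            protected void succeeded(Description description) {
                setJobResult("passed");
            }

            @Override
            protected void failed(Throwable e, Description description) {
                setJobResult("failed");
            }

            @Override
            protected void finished(Description description) {
                if (driver != null) {
                    driver.quit();   // always release the remote browser
                }
            }

            private void setJobResult(String status) {
                if (driver == null) {
                    return;   // setup failed before the session was created
                }
                // "sauce:job-result" is interpreted by Sauce OnDemand rather than
                // by the browser under test.
                ((JavascriptExecutor) driver).executeScript("sauce:job-result=" + status);
            }
        };

        @Test
        public void addsAnItem() {
            driver.get("http://your-app-id.appspot.com/");
            // ...drive the To Do application and make assertions here...
        }
    }

With a rule like this in place, the dashboard’s pass/fail column reflects the JUnit result without any change to the individual test methods.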

Clicking on a session link will bring up a page dedicated to that test, as shown in Figure 4.9. The different tabs are explained below:

•  Commands: The Commands tab shows every command that was called during the script, including information about the input or verifications. At each step, a screenshot was automatically taken, which is viewable on the right.

•  Screencast: The Screencast shows a recording of the test, allowing users to watch the test playback from start to completion.

•  Selenium log: The Selenium log is available for the test execution.

•  Metadata: The Metadata tab shows all metadata associated with the test. Additionally, various logs and media are available for download. In this case, a Selenium, Sauce, and Firefox log are available, along with one video and 13 screenshots.


Figure 4.9: Sauce Labs test results.

Sauce at the Enterprise Level: Sauce Labs has been successfully integrated at the enterprise level. Several case studies are available on their website, but the benefits for a select few are listed below:

•  Mozilla [45]: With 18 projects and 80 suites, Mozilla can more reliably and quickly run thousands of tests each day with less stress. Test suites that took 8 minutes to run locally take only 2 minutes to run on Sauce due to parallelization. With this in mind, Mozilla expanded their testing to include more browsers and more operating systems.

•  Okta [46]: Okta is a leader in enterprise identity management, serving 300,000 people through its cloud-based system. Okta was previously using local Selenium tests, but migrated to Sauce Labs and saw immediate benefits. Key improvements include a massive reduction in time taken to run the core test suites (from 24 hours to 10 minutes), a massive reduction in time taken to debug tests (from 3 days to 3 hours), and a massively expanded scope thanks to running hundreds of tests in parallel.

•  Eventbrite [47]: Eventbrite immediately noticed improvements by switching to Sauce Labs in terms of development, testing, and debugging time. They connected to Jenkins to provide continuous integration support and run a smoke test job containing 20 tests as a sanity check with every build. When successful, the smoke test is followed by a suite of 700 Selenium tests using 35 concurrent threads focusing on Chrome, Firefox, and Internet Explorer. They claim that stability has never been better.

Other corporations using Sauce Labs include Yelp, Dropbox, BBC, Adobe, and Travelocity.

SOASTA CloudTest Lite

Getting started: On the SOASTA homepage, new users can sign up for a free account. Once the required information is submitted, an email with instructions is sent to the user. SOASTA provides a CloudTest Lite virtual machine image and states that the minimum hardware requirements are 4 GB RAM (with virtual machines requiring 2 GB), a 64-bit processor, and 20 GB of free disk space. The image can be installed in VMware Player on Windows and Linux or VMware Fusion or Parallels on Mac OS X. Alternatively, the image can be deployed directly onto a VMware ESX, ESXi, or vSphere environment using the Open Virtualization Archive (OVA) package provided. SOASTA’s 64-bit virtual machine is built on the CentOS (“Community Enterprise Operating System”) Linux distribution, but no Linux experience is necessary to use CloudTest Lite.

For this case study, VMware Player was installed on a Windows 8 machine that exceeded the minimum software and hardware requirements. The VMX file (the primary configuration file for a virtual machine) was opened in the Player (Player > File > Open…) to start CloudTest Lite. For Intel machines running a 64-bit virtual machine, Intel Virtualization Technology must be enabled in the BIOS; otherwise, VMware Player may throw an error or will simply display a black screen and close.

Once the virtual machine is running, the user must enter a license activation key. At this point, the screen shown in Figure 4.10 should be displayed. Navigate to the given web address on the local machine’s browser to access the CloudTest Lite dashboard. The required username and password are also given in the instructional email. Once logged in, the dashboard shown in Figure 4.11 should be displayed.


Figure 4.10: CloudTest Lite in VMware Player.

SOASTA CloudTest Lite’s dashboard contains a significant amount of information, which may be overwhelming to first-time viewers. The bottom half of the center module offers an abundance of information, including forums, a knowledge base, training videos, documentation, and support. SOASTA’s system is not the most intuitive, but they do provide an incredible amount of resources to help new and experienced users. Additionally, users should occasionally receive emails about seminars, including “Getting Started” webinars.


Figure 4.11: CloudTest Lite dashboard.

While the basic steps to creating and running a test are described below, this case study was performed after watching several training videos on recording and playing test scripts using CloudTest Lite. These videos were extremely helpful, walking users through the tool and clearly explaining the actions being performed. They are highly recommended for learning CloudTest Lite.

Creating tests: To enable recording, users must first install the SOASTA Conductor, which is available in the right module of the dashboard under “Downloads.” Once installed, the Conductor should start automatically, and its icon should appear in the Windows taskbar near the date and time. If the Conductor did not start automatically, right-click on the icon and click “Start.”

Once the Conductor is running, select the type of test to create, using the links provided in the center module. In Figure 4.12, the steps are shown for creating a performance or load test. First, a recording must be created and saved. A completed recording can be converted into a sequenced test clip. Saved test clips can be combined to form a test composition, which is executed to perform testing.


Figure 4.12: CloudTest Lite Performance Test steps.

To begin, click the recording option. In the Target Definition Wizard, select the type: HTTP, SOAP with a WSDL URL, WebUI/Ajax, Native/Mobile App, Existing Target(s). In this case study, an HTTP recording was used. On the next screen of the wizard, fill out the target name, the location (URL), and any required authentication information to continue. Upon completion, the Test Clip screen should be displayed with an option to record new scripts. At this point, SOASTA will verify that a Conductor is running on the local machine.

Once the recording begins, the Conductor will record all actions that send data across the network. For this reason, SOASTA recommends closing any programs that access the internet. All actions performed against the system under test should be done using a separate browser, such as Firefox or Chrome, that has first been cleared of all history and data and set to the “about:blank” page prior to recording. As actions are recorded, they appear as icons across the page. Each icon can be clicked on to view the request and response headers and bodies. The response body can be viewed as an HTML page, although no style sheet is applied. A completed recording is shown in Figure 4.13.


Figure 4.13: CloudTest Lite recording.

Once the recording is complete, the steps can be converted to a test clip. As seen in Figure 4.14, the test clip screen displays the individual steps that were performed. At this point, the script can be modified. One option is to add validations to the script, allowing the verification of data and causing script failures under certain conditions. Another is to generate or import test data. When the test script is prepared for use, it should be saved.

In the test composition screen, saved test clips can be dragged onto various tracks. Each track is sequenced, causing its clips to run one after another, while the tracks themselves can run concurrently. Clicking on the icon in each track sets the number of users, and clicking on the icon within each clip sets the number of repetitions. Ramp-up times and pacing can be set within the properties for each clip. A test composition is shown in Figure 4.15. Three instances of the same test clip are used, two running in parallel and one running sequentially, each with varying numbers of users and repetitions. Once the composition is complete, the tester can press the play button and execute the script.

The process for creating and executing the first test took longer in SOASTA than the other tools examined. A few hours were required due to the complexity of the tool and the need to watch the educational material.

Figure 4.14: CloudTest Lite test clip.

Figure 4.15: CloudTest Lite test composition.

Running the test and viewing results: During test execution, various dashboards are available. The Results tab, shown in Figure 4.16, displays the steps as they are called and offers basic information for each action, including duration, average response time, bytes sent and received, and throughput. This page can be modified to include a large variety of widgets, as seen in the left module. Additionally, preconfigured or custom dashboards can be added in new tabs, such as SOASTA’s Load Test Summary shown in Figure 4.17.

Figure 4.16: CloudTest Lite test results.

Figure 4.17: CloudTest Lite load test summary.

Google App Engine has its own charts showing usage over time. Figure 4.18 shows a summary of requests on the To Do application during CloudTest Lite’s test execution. As expected, this graph of total requests over time closely follows SOASTA’s Send Rate graph shown in the top right of Figure 4.17.


Figure 4.18: GAE Request Graph—CloudTest Lite.

SOASTA at the Enterprise Level: SOASTA CloudTest is built for enterprise testing and has been successfully used to test massive undertakings including the following:

•  London 2012 Olympics [48]: Six months prior to the 2012 Summer Olympics, the website team began working with SOASTA in hopes of finding all possible bottlenecks to prevent catastrophic situations that would impact the user experience and the London 2012 brand. SOASTA performed over 500 tests on the site and mobile apps, simulating over 400,000 concurrent users from across the globe. The team stated that they would have required hundreds to thousands of servers and weeks of setup to perform a single test, but they instead simulated 100,000 users within minutes. By the end of the Games, the following totals were reached:

º  1.3 PB of data served

º  1.7 billion object requests

º  46.1 billion page views (HTML/HTMX)

º  At its peak, 104,792 page views per second on the web and 17,190 page views per second on the mobile application

•  Microsoft [49]: SOASTA assisted Microsoft in testing its Windows Azure Platform in 2010. The customer site, Office.com, was tested using 10,000 concurrent virtual users with domestic network latency. The test planning took only three days, while the tests themselves took only three hours. SOASTA managed the testing process while Microsoft employees used SOASTA’s OLAP engine to observe the real-time results. Office.com engineers agree that they spent far less time testing and were able to count on higher reliability thanks to the scaling capabilities of CloudTest.

•  The Kentucky Derby [50]: The United States Thoroughbred industry faces increasing technological challenges online due to internet wagering, betting, viewing, and participation. Because of major spikes in traffic, Churchill Downs, Inc. decided to test website performance prior to Derby Day to better understand the performance and scalability of their own infrastructure, seek improvements, and achieve 10,000–12,000 HTTP hits per second with optimal response times. Their existing performance testing practice used open source tools with only two web servers. With little time remaining before the race, they successfully sought help from CloudTest to emulate the anticipated volume from outside their firewall.

Other SOASTA users have included Activision, SAP, Lenovo, Intuit TurboTax, and more.

BlazeMeter

Getting started: Of the TaaS tools examined, BlazeMeter was the simplest to understand and use, especially when utilizing the Chrome extension to record tests. To create an account, users only need to enter their email, password, and name. Once logged in, the extension’s information can be found on the “GET and POST Requests” section of the dashboard, which offers all the instructions required to use the tool. The extension can be installed from the Chrome Web Store at no cost.

Recording tests: Once installed, the BlazeMeter icon appears to the right of the address bar in Chrome. Click on the icon to set up and start the recording. The tool, shown in Figure 4.19, allows users to set a name, the concurrency, the load origin (Ireland, Virginia, Northern California, Oregon, Singapore, Sydney, Tokyo, or São Paulo), and other options. Once the record button is pressed, all actions within the browser are saved. When no further actions are required, press the stop button to end the recording.


Figure 4.19: BlazeMeter Chrome extension.

If the recording is satisfactory, press the “.jmx” button to convert and save the script as an Apache JMeter file. If the recording requires editing, click the pencil button instead to view the Editor, which is shown in Figure 4.20. The Editor allows scripts to be exported as a JMX or JSON file. JMeter files can be uploaded into BlazeMeter. Using the BlazeMeter plugin, a simple test was written and prepared for execution within minutes.


Figure 4.20: BlazeMeter script editor.

Running tests and viewing results: Using the BlazeMeter dashboard, test scripts can be uploaded in the Apache JMeter section, shown in Figure 4.21. Once uploaded, changes can be made to the number of concurrent users, the ramp-up period per thread group, the number of iterations per thread group, and the duration per available thread group. When the test configuration has been set, the information can be saved as a test, which becomes available on the “Tests & Reports” page. Pressing play runs the test, as shown in Figure 4.22. From the reports screen, real-time test results are displayed. Upon completion, an email is sent to the account owner to notify them that the test has finished executing.


Figure 4.21: BlazeMeter JMeter test configuration.


Figure 4.22: BlazeMeter test.

Reports are available on the “Tests & Reports” page. Various graphs are available, comparing aspects of the test results. Results can also be compared against the results of previous tests. Examples of load results and performance monitoring graphs are shown in Figures 4.23 and 4.24, respectively. In Figure 4.23, the thicker lines represent the current test, while the thinner lines are the previous test results. Errors and JMeter logs are also available. The errors tab is shown in Figure 4.25. In this case, errors were caused by using all of the datastore read operations permitted by the free version of Google App Engine in a single day, locking down the application until the resources reset. Figure 4.26 shows the GAE dashboard for this time period. Plugins, such as New Relic, would also be displayed here if they were configured.

Figure 4.23: BlazeMeter reports—load results.

Figure 4.24: BlazeMeter reports—monitoring.

Figure 4.25: BlazeMeter reports—errors.

Figure 4.26: GAE Request Graph—BlazeMeter.

BlazeMeter at the Enterprise Level: Many groups use BlazeMeter at the enterprise level, including Adobe, BBC, Citibank, RE/MAX, Massachusetts Institute of Technology, and Nike. While they do not yet have case studies available on the website, they do post a selection of testimonials. Three are repeated below, all from the testimonials site [51]:

•  “BlazeMeter allows us to effortlessly load test at large scale capacity. We are able to run 50,000 simultaneous user tests on a weekly basis and always head into any large-scale event with absolute confidence.”—Shlomo Rothschild, Director, Divisional Information Security and Architect at InterCall

•  “BlazeMeter enabled us to get test results fast and effortlessly. Waiting to provision dozens of test servers and manage the intricacies of distributing large scale load tests is too costly and simply unrealistic in traditional IT environments.”—Mike Valenty, Technical Director, Double Jump Games & Maker of iWin Slots

•  “After evaluating various load testing tools, BlazeMeter’s solution was the obvious choice. I found it professional, scalable and most importantly—simple to use.”—Shay Finkelstein, CEO, Dataphase

4.2.5   RESULTS OVERVIEW

The evaluation framework’s results are displayed in Table 4.2. The criteria are derived from the hypotheses explained in Section 4.2.2. The results are primarily pass or fail, where a checkmark denotes a pass and a blank cell denotes a failure. A pass means that the criterion is satisfied for the tool, while a failure means that the criterion was not met. In some cells, an asterisk is given, indicating that the criterion is only partially met. Each partial pass is described after the table of results.

Table 4.2: Evaluation results overview


Three criteria had partial passes given. Each is described below:

•  [1.3] The tool can generate test cases for the system under test: Of the three tools, only BlazeMeter has the option for the fully automated generation of test cases (using Drupal or Google Analytics). However, all of these tools offer assisted test case generation in the form of test recording and playback.

•  [2.3] The tool has customizable reports: By default, Sauce Labs will only state whether a test has completed its execution—not whether it passed or failed. Verifications and assertions will only be displayed in the table of results if the information is passed from the script, requiring some programming. Updating the script to contain this information is fairly simple, though, and Sauce Labs does provide a great deal of detail with the screenshots, videos, and logs.

4.2.6   THREATS TO VALIDITY

While this case study attempted to provide a view of several TaaS tools used in realistic scenarios, several factors could challenge the results. For example, the hypotheses may be incomplete. Incomplete hypotheses may mean that the results apply only to certain scenarios, namely those evaluated in this case study or those discussed in the case studies from enterprise-level users, and do not generalize further. Individuals and teams may have a very specific set of needs, which may or may not be met despite the general requirements being satisfied.

Furthermore, the tools selected may not meet the requirements of the prospective users. While unit and performance testing were both considered to be “in demand” according to the HPST survey, the teams open to TaaS tools may instead require other forms of testing or prefer that the tools be packaged with more features.

Not all features of the tools discussed were examined and evaluated. These tools have many features, not all of which could be examined during the course of this case study due to time constraints and restrictions on the number of tests that could be run per month on the free plans. These other options were described based on their advertised features, which may or may not be entirely accurate in terms of offerings or quality.

Several of the criteria require subjective responses, particularly in the area of education and training. These determinations were made as objectively as possible, based on experiences within this case study and the enterprise-level case studies. Nevertheless, this leaves room for differing opinions or error.

4.3    GAP ANALYSIS

This section provides a gap analysis based on the needs of the industry and the capabilities of the TaaS tools. The TaaS tools are discussed together, with examples of strengths and weaknesses cited from each. Additionally, examples from the academic literature are included to showcase potential improvements and solutions that are already being researched and developed.

4.3.1   UNSATISFIED HARD PROBLEMS

As shown in Table 4.2, the TaaS tools satisfied the majority of the criteria. Nevertheless, improvements can still be made in several areas.

Fully automated test case generation

While all three tools examined allowed the recording and playback of tests, only one offered the ability to generate test cases automatically. The burden of tester education and training could be greatly reduced by building tools that can create tests without any assistance, even if users need to supply test data. For areas such as functional, regression, and unit testing, this may not be feasible. However, load, performance, and security testing tools should have some automatic test generation capabilities, even if additional plugins are required. BlazeMeter’s incorporation of Google Analytics and Drupal to create tests based on previous usage greatly simplifies load testing for users with less experience in testing or with heavily restricted schedules.

Compatibility with external tools

Allowing the TaaS tools to integrate with version control systems, build drivers, and defect tracking systems can simplify software development, testing, and configuration management, creating a smoother overall process. The ability to integrate with continuous integration tools like Jenkins, especially considering Jenkins’s plugin library, is a great start. Working with version control systems could allow tests to be rerun whenever a developer checks in code, catching potential issues at the earliest point in time. Integration with build drivers could allow users of tools such as Ant and Maven the ability to verify that every build passes the scripted tests before moving builds into test or production environments. Compatibility with defect tracking systems could allow users to create bug reports directly from test results or perhaps even allow the automatic generation of bug reports during testing.

More types of testing

The ability to perform multiple types of testing within one tool would be a major draw for users. TaaS tools may be affordable, but combining several tools for the various aspects of testing would cause testing to, once again, become an expensive investment. Some of the tools already provide various types of testing, including SOASTA and BlazeMeter. However, the types of testing in greatest demand, according to the HPST survey, were unit, regression, performance, and security testing. While some companies claim to handle both performance and security testing, testing all four as a service would require, at minimum, three separate tools.

Greater extensibility

All three tools describe some form of extensibility, typically through access to Jenkins and, by extension, the Jenkins plugin library, or through a selection of external tools, such as Selenium Builder for Sauce Labs or New Relic for BlazeMeter. While these options increase the capabilities of the existing system, promoting even greater extensibility could provide users with more features with less impact on the existing capabilities of the tool and less pressure on the vendor.

4.3.2   CAVEATS

There are several caveats related to the testing challenges identified in the HPST survey and the capabilities of current TaaS tools to satisfy them.

Training and education

While fully automated test case generation and extensive documentation and guides may assist those with limited experience, no amount of assistance can make up for a complete lack of testing knowledge. Despite tool capabilities, testers need to remain educated on various aspects of their work, including the application under test, testing techniques, weaknesses of software, risks of platforms or software, how programs fail, and more.

Need for tools/better tools

Most of the tools examined focus on testing web and mobile applications. Legacy desktop applications may not be testable using services at this point in time, although some academics, such as Candea, discussed testing services that accepted binaries and executables [17].

Additionally, the nature of web applications makes it difficult, if not impossible, to test the underlying code with full coverage. For example, programming languages like Java require code to be compiled into class files before the application is deployed, and a service that only sees the deployed application cannot exercise individual functions directly. While much of this code should still be testable from the interface, unit testing may be more difficult in the traditional sense. On the other hand, Kosmatov’s proposed structural unit testing service [24] could provide true unit testing with fuller coverage, if implemented.

Generating realistic schedules

TaaS promotes more efficient testing through the creation and execution of automated tests that can easily be rerun throughout the cycle. This should reduce the time required for testing and prevent testing from falling to the very end of the project. However, TaaS cannot help with generating the schedules themselves; it simply frees up more time in which testing can be performed.

Improving communication

Dashboards that are easy to read and understand should be consumable by all members of the team, allowing for better communication between the test team and other groups. The information should allow developers to quickly pinpoint issues while easily showing management relevant statistics and project trends. At the same time, concerns about information sharing within the team remain. Knowledge sharing between expert and novice employees may not become easier, but the learning process should be simpler overall.

4.3.3   ADDITIONAL CONCERNS

The academic literature provides several additional concerns that should be considered when examining the shortcomings of the TaaS tools. The challenges and needs were described in detail in Section 3.2.4. Several appear to be managed by the examined tools, such as maintaining availability, creating innovative solutions, and building trust. However, others, such as offering transparent pricing, are a bit lacking. For example, SOASTA has no pricing available on their website for the paid version of CloudTest—an issue that also applied to the majority of tools described but not included in the case study.

Another major concern is security—of the tests, of the system under test, the test data, and the results. The decision to work with a third party always creates security risks. In the case of TaaS, risks are further increased by utilizing the cloud, especially public clouds. TaaS companies make security a top priority with private accounts, secure services, data privacy guarantees, and clean installs of operating systems that are destroyed after test runtime. Nevertheless, concerns still exist about protecting proprietary information.
