…software defect removal is the most expensive and time-consuming form of work for software.
Capers Jones
When building an API product or platform, it is important to establish an API testing strategy. Selecting the right approach to API testing contributes to the supportability of an API program. It also contributes to faster delivery while avoiding one of the costliest aspects of software development: defect removal. Finally, because automated testing is consumer oriented by nature, it offers another perspective on the developer experience of an API.
Acceptance testing, also called solution-oriented testing, ensures that the API supports the captured job stories. It seeks to answer the following questions:
■ Does the API solve real problems that our customers have?
■ Does it produce the desired outcomes for the jobs to be done?
Acceptance testing verifies the collaboration of API operations required to achieve a desired outcome. Acceptance tests are composed using only the API interface to verify that the system delivers all expected end-to-end functionality. The internals of the API can and likely will change over the course of development, but this should not affect the results of the acceptance tests.
Acceptance testing is the most valuable style of testing for an API. Writing acceptance tests helps identify poor developer experience, whether for a single API operation or across the end-to-end integration. When limited time is available, it is where the most testing effort should be spent, after code testing.
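As a sketch, an acceptance test exercises a complete job story using only the API interface. The following pytest example assumes a hypothetical bookstore API with cart and checkout operations and a test-environment base URL; the actual operations and fields would come from the captured job stories.

```python
# Minimal acceptance test sketch (pytest + requests). The base URL, endpoints,
# and field names (cartId, bookId, status) are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # assumed test environment URL


def test_customer_can_place_an_order():
    # Create a cart through the public API interface only
    cart = requests.post(f"{BASE_URL}/carts", json={"customerId": "cust-123"}).json()

    # Add a book to the cart
    requests.post(
        f"{BASE_URL}/carts/{cart['cartId']}/items",
        json={"bookId": "book-456", "quantity": 1},
    ).raise_for_status()

    # Check out and verify the desired outcome of the job story
    order_response = requests.post(f"{BASE_URL}/carts/{cart['cartId']}/checkout")
    assert order_response.status_code == 201
    assert order_response.json()["status"] == "pending"
```

Because the test touches nothing but the API interface, the implementation behind it can change freely without affecting the test's outcome.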
Each week, a new headline appears that indicates a company has been hacked and private information exposed. Security is a process, not a product, and a continual one at that. Security testing aims to answer the following questions:
■ Is the API protected against attacks?
■ Does the API offer opportunities for sensitive data to be leaked?
■ Is someone scraping the API and compromising business intelligence through the data it exposes?
While not typically associated with automated testing, security testing is an active process that includes design time review processes, development time static and dynamic code analysis, and run time monitoring.
Design time and development time security testing is often made up of policies and tools, including design reviews that identify potential concerns and help prevent sensitive data from being leaked. It also includes authorization policies for each API operation to ensure proper access is enforced.
An API management layer may be employed to apply run time monitoring and enforcement. Authorization enforcement is managed through configuration, avoiding the need to implement access restrictions within the API implementation. Log analysis may be used to detect and block malicious attacks. More details on security protection are offered in Chapter 15.
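Some of this protection can also be verified with automated tests. The sketch below, with hypothetical endpoints and tokens, checks that a protected operation rejects missing or insufficient credentials; real suites would cover every protected operation and each role or scope defined in the authorization policy.

```python
# Minimal security test sketch verifying that authorization is enforced at run time.
import requests

BASE_URL = "https://api.example.com"  # assumed test environment URL


def test_rejects_request_without_credentials():
    response = requests.get(f"{BASE_URL}/orders")
    assert response.status_code == 401  # unauthenticated requests must be refused


def test_rejects_token_with_insufficient_scope():
    headers = {"Authorization": "Bearer read-only-token"}  # hypothetical token value
    response = requests.post(f"{BASE_URL}/orders", json={}, headers=headers)
    assert response.status_code == 403  # authenticated but not authorized
```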
APIs can and often do provide the primary interface for applications to interact with a system. Because the API plays the role of a dependency, it is critical for the API service to be available, whether to other services internal to the organization or to external partners and customers. Additionally, there may be service level agreements (SLAs) that the company has established with customers and partners regarding the performance and uptime of an API. Failing to meet an SLA could yield a negative financial result, in addition to the prospect of angry or upset customers.
Operational monitoring answers the following questions:
■ Is the API available and performing as expected?
■ Is the API staying within expected SLAs?
■ Is there a need to provision more infrastructure to meet performance goals?
Monitoring and analytics solutions are an important component of API operational monitoring. Analytics verify that real-world usage matches what was seen in testing, for both correctness and performance. Analytics measurements can be as simple as logging performance counters or as complex as integrating third-party libraries with extensive monitoring and visualization support.
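At the simple end of that spectrum, a synthetic check can call a health endpoint, record latency, and flag responses that exceed the SLA. The health URL and SLA threshold below are assumptions; a real deployment would feed these measurements into a monitoring and analytics solution rather than print them.

```python
# Minimal synthetic monitoring sketch: measure availability and latency against an assumed SLA.
import time
import requests

HEALTH_URL = "https://api.example.com/health"  # hypothetical health-check operation
SLA_MAX_LATENCY_SECONDS = 0.5                  # assumed SLA target


def check_availability_and_latency():
    started = time.monotonic()
    response = requests.get(HEALTH_URL, timeout=5)
    elapsed = time.monotonic() - started

    available = response.status_code == 200
    within_sla = elapsed <= SLA_MAX_LATENCY_SECONDS
    print(f"available={available} latency={elapsed:.3f}s within_sla={within_sla}")
    return available and within_sla


if __name__ == "__main__":
    check_availability_and_latency()
```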
API contract testing, sometimes referred to as functional testing, is used to verify that each API operation meets the expected behavior and honors the API’s defined contract for the consumer.
Contract testing answers the following questions:
■ Is each operation working to the specification for all success cases?
■ Are input parameters being followed? How are bad inputs handled?
■ Are the expected outputs received?
■ Is response formatting correct? Are the proper data types used?
■ Are errors being handled correctly? Are they reported back to the consumer?
In the ADDR Process, API descriptions are defined during the design process, prior to implementation. These description files may be used to verify the API contract as part of the contract testing process. Some common contract specification formats for REST APIs include OpenAPI (Swagger), API Blueprint, and RAML. GraphQL APIs will have a schema defined, which will help drive contract testing. gRPC APIs define service contracts using an IDL file. This topic is discussed further in Chapter 13.
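One way a description file drives contract testing is by validating live responses against the schema defined for each operation. In the sketch below, the inline schema stands in for one extracted from the API's OpenAPI description, and the endpoint and field names are hypothetical.

```python
# Minimal contract test sketch: validate a response against the operation's schema.
import requests
from jsonschema import validate

BASE_URL = "https://api.example.com"  # assumed test environment URL

BOOK_SCHEMA = {
    "type": "object",
    "required": ["bookId", "title", "price"],
    "properties": {
        "bookId": {"type": "string"},
        "title": {"type": "string"},
        "price": {"type": "number"},
    },
}


def test_get_book_honors_the_contract():
    response = requests.get(f"{BASE_URL}/books/book-456")
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")
    validate(instance=response.json(), schema=BOOK_SCHEMA)  # raises on contract violation
```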
API contract testing must first ensure the correctness of each API operation. Handling thousands of clients per minute does no good if the information that the API is providing or acting on does not meet the API’s specification. Identifying and eliminating bugs, hunting out inconsistencies, and verifying that an API meets the spec against which it has been designed all fall under the umbrella of testing for correctness.
Next, API contract testing must focus on reliability. The API should provide the correct information every time an operation is called. Executing the same action repeatedly for an API operation designed to be idempotent should produce the same results. API operations that support pagination should page through results in a predictable way.
Finally, API contract testing should submit invalid and missing data and verify that the expected error response is received. String values may be submitted in place of numeric values, along with values outside the acceptable range. Incorrect date formats should be submitted, along with dates outside an expected range of acceptable dates.
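A minimal sketch of such negative contract tests is shown below. The operation, field names, and error response shape are hypothetical; the point is that each category of bad input should produce the documented error.

```python
# Minimal negative contract test sketch: invalid inputs must yield the documented error response.
import pytest
import requests

BASE_URL = "https://api.example.com"  # assumed test environment URL

INVALID_PAYLOADS = [
    {"quantity": "three"},                           # string where a number is expected
    {"quantity": -1},                                # value outside the acceptable range
    {"quantity": 1, "requestedDate": "31-02-2025"},  # malformed or impossible date
    {},                                              # required fields missing entirely
]


@pytest.mark.parametrize("payload", INVALID_PAYLOADS)
def test_invalid_input_returns_a_useful_error(payload):
    response = requests.post(f"{BASE_URL}/carts/cart-123/items", json=payload)
    assert response.status_code == 400
    assert "errors" in response.json()  # error details reported back to the consumer
```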
User Interface Testing vs. API Testing
Some team members may suggest that building dedicated tests for an API is wasteful. They may attempt to make the case that user interface tests cover the API sufficiently, since the UI calls the API. However, this is not the case. UI tests exercise the API only as far as the UI itself does. This means that if the UI is performing client-side validation of user input, UI tests would never verify the API's ability to handle bad data.
While some may say that this level of testing is sufficient, they may be forgetting the recommendation of OWASP: do not trust user input. A user or client will not always submit data in a way that an API will expect. Always validate the data that comes from forms as well as HTTP request headers.
One of the goals of API testing is to ensure that the API is able to handle a multitude of good and bad values that may be submitted outside of a specific user interface. If we depend only upon UI tests, then the API should not be considered sufficiently tested.
Another goal of API testing is to ensure that the API cannot be deployed into production without passing tests. This requires that API tests become part of the CI/CD pipeline, just like all other types of automated testing.
Some organizations may have an established quality assurance (QA) group that specializes in test automation and manual exploratory testing. QA teams may include those that write code and others that use testing tools to help compose test automation suites without the need to write code. Other organizations may not have dedicated QA teams at all, instead relying upon developers to write and maintain API test code. Selecting API testing tools must take these factors into consideration.
There are a number of open source and commercial testing tools available today that support the creation of API tests using API specification formats to help jumpstart the testing process. Some are designed to support the creation of tests through a user interface, reducing or eliminating the need to write test code. Others offer a scripting environment or test libraries that require coding. Be sure to select solutions that match the testing preferences and skills found in the organization.
API monitoring-as-a-service solutions are available from a range of third-party companies and often start as a freemium service for a small number of tests. Open source monitoring tools are available that can be run on on-premises or cloud-hosted infrastructure. Custom tools built to perform load and performance testing can be modified to run less frequently and at a smaller scale for the purpose of monitoring or soak testing.
API testing is often automated through code or test scripts and executed in a dedicated test environment. Automating these tests has a higher infrastructure cost due to the need for additional non-production environments and the infrastructure resources they require. Be sure to take into consideration how tests will be automated and the infrastructure cost required to support them.
Finally, consider how test-driven development may be extended through the strategic selection of API testing tools. Dedicated QA teams may build automated test suites that can be executed by developers as they implement the API. Developers that are tasked with writing the API tests themselves may wish to take a similar approach, much like they apply TDD to their day-to-day development process. This helps to demonstrate progress and validate that an API implementation handles all success, invalid, and error cases.
One of the challenges that must not be overlooked when establishing an API test strategy is the need for test data sets. While unit testing may not require complex data sets, API testing has the exact opposite demands. API testing often involves a tremendous amount of effort to build a cohesive set of data that will support the necessary test cases.
There are two common approaches to creating test data sets for APIs: taking a snapshot of existing data sets and cleanroom data set creation. Taking a snapshot of a production system and cleansing the data set of sensitive data is often the most direct path. Rather than trying to separate out only the necessary data, it accepts an entire data store snapshot as a starting point, which requires less effort. The snapshot may be used to restore the test data back to a known state. This is a great approach when production data already exists.
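The cleansing step might look like the sketch below, which replaces sensitive fields with synthetic values while preserving record identifiers so the snapshot remains internally consistent. The table and column names are hypothetical.

```python
# Minimal sketch of cleansing a snapshot before loading it into a test environment.
import sqlite3


def cleanse_snapshot(db_path="snapshot.db"):
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT customer_id FROM customers").fetchall()
    for (customer_id,) in rows:
        # Replace personally identifiable fields; keep the identifier intact
        conn.execute(
            "UPDATE customers SET name = ?, email = ? WHERE customer_id = ?",
            (f"Customer {customer_id}", f"{customer_id}@example.test", customer_id),
        )
    conn.commit()
    conn.close()
```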
Cleanroom data set creation is a bit more challenging and takes considerable time, but once completed it enables more robust test cases. Cleanroom data involves the creation of cohesive data sets from the ground up to support the API testing process. Tools such as Mockaroo may be used to synthesize some of the data while providing more realistic values than purely random ones. However, handcrafting data elements is often required to construct deeply nested data sets that represent entire scenarios rather than just a single table of data.
For example, JSON’s Bookstore would require books, carts, orders, and customers that are not easily generated randomly. Instead, it is often necessary for domain experts to construct these elements manually, perhaps using a spreadsheet. A script then loads this data into the appropriate data stores, ensuring the elements are properly connected through shared identifiers, foreign keys, and link tables. Tests could then use the API to retrieve a customer, examine their orders, execute a new shopping experience, and verify that the API functions as expected.
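Such a loader script might resemble the sketch below. The file names, tables, and columns are hypothetical; the important detail is preserving the shared identifiers that connect customers, orders, and books across data stores.

```python
# Minimal loader sketch for handcrafted test data authored by domain experts in spreadsheets.
import csv
import sqlite3


def load_test_data(db_path="test_bookstore.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS customers (customer_id TEXT PRIMARY KEY, name TEXT)")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders ("
        "order_id TEXT PRIMARY KEY, customer_id TEXT REFERENCES customers(customer_id))"
    )

    with open("customers.csv", newline="") as f:
        for row in csv.DictReader(f):
            conn.execute("INSERT INTO customers VALUES (?, ?)", (row["customer_id"], row["name"]))

    with open("orders.csv", newline="") as f:
        for row in csv.DictReader(f):
            # The foreign key ties each order back to its customer
            conn.execute("INSERT INTO orders VALUES (?, ?)", (row["order_id"], row["customer_id"]))

    conn.commit()
    conn.close()
```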
Some API testing may depend upon third-party services that do not offer their own sandbox or test environments. In this case, techniques such as API mocking may be used to isolate external dependencies and prevent the need to involve production systems as part of an API test suite. Rather than directly connecting to the system, a mock response may be created to take the place of the system. Of course, this often requires additional data preparation work to ensure that the mock data will properly satisfy the use cases that are to be supported.
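As a sketch, the widely used responses library can register a canned reply for a third-party URL so that any code path calling it during a test receives the mock rather than reaching a production system. The payment provider URL and payload below are hypothetical.

```python
# Minimal API mocking sketch using the "responses" library to stand in for a
# third-party dependency that offers no sandbox environment.
import responses
import requests


@responses.activate
def test_checkout_with_mocked_payment_provider():
    responses.add(
        responses.POST,
        "https://payments.example-partner.com/charges",  # hypothetical external system
        json={"chargeId": "ch-789", "status": "approved"},
        status=201,
    )

    # Any POST to the mocked URL during this test receives the canned response
    reply = requests.post(
        "https://payments.example-partner.com/charges", json={"amount": 1999}
    )
    assert reply.json()["status"] == "approved"
```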
Too often, teams choose to take shortcuts when time is short, and this typically involves poor or no API testing. Like documentation, testing is often seen as a nice-to-have in the development process. However, we should view testing and documentation as essential steps to truly calling the API done and ready to deploy. Otherwise, we are creating opportunities for bugs to creep into partner and customer interactions. Worse, it could open the organization up to malicious attacks through one or more APIs.
A robust API testing strategy is an important step in API delivery and a formidable defense against regressions sneaking into an API. A proper API testing strategy will help ensure API correctness and reliability while ensuring the desired outcomes are achievable. It should also extend beyond the development phase and into runtime testing to maintain a secure and performant environment. An API should not be considered complete until all tests have been created, executed, and passed.