In order to deploy a RESTful web service in a commercial environment, a number of criteria must be met. One of these criteria is performance: besides yielding the correct result, RESTful endpoints must do so in a timely manner. This chapter discusses how these concerns can be addressed in real-world web services. Performance optimization techniques can be applied to different aspects of a web application; in this chapter, however, we will focus on the RESTful (web) layer. Chapter 10, Scaling a RESTful Web Service, explores techniques that apply to other aspects of web applications. The following topics will be covered in the next few pages:

- HTTP compression
- Last-Modified/If-Modified-Since headers

To illustrate these techniques, we will build the room availability component of our sample property management system web service.
While communicating with a remote service, an unavoidable amount of time is spent sending and receiving data over the network. To reduce network latency from an application's point of view, service designers can ensure that the number of round trips is kept to a minimum; this is the subject of the remainder of this chapter. For now, however, let's take a look at another technique that can be employed to reduce the amount of data that is sent across the wire. The HTTP specification defines a mechanism for applying compression algorithms to responses before they are transmitted to clients.
HTTP compression revolves around content negotiation between the two parties (the server and the client). The client must notify the server about what compression algorithms it supports. Typically, these would be deflate and gzip. Clients do so by adding the following header to requests:
"Accept-Encoding": "gzip, deflate"
If the server supports one of these compression schemes, it can apply the scheme to the outgoing data. If the data is compressed, the server should add the following header to responses:
"Content-Encoding": "gzip"
With that information, the client is able to process response data appropriately.
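To make this exchange concrete, here is a minimal, self-contained sketch in plain Java (no real network involved; the class name and sample payload are illustrative). It shows what each side does under the hood: the server compresses the response body before writing it out with a Content-Encoding: gzip header, and the client, having seen that header, decompresses the bytes it receives.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {

    // What the server does before sending a response with Content-Encoding: gzip
    static byte[] compress(String body) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return buffer.toByteArray();
    }

    // What the client does after seeing Content-Encoding: gzip in the response
    static String decompress(byte[] compressed) throws Exception {
        try (GZIPInputStream gzip =
                new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return new String(gzip.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"roomId\": 42, \"available\": true}";
        byte[] wire = compress(json);
        System.out.println("bytes on the wire: " + wire.length);
        System.out.println("round trip:        " + decompress(wire));
    }
}
```

In practice, both HTTP clients and servlet containers perform these steps transparently; the sketch only makes visible what the Accept-Encoding/Content-Encoding negotiation agrees upon.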
Other compression schemes can be used with HTTP, but gzip and deflate are the most common. So, which one should service designers prefer? Unfortunately, there is some confusion about naming in the HTTP specification. Deflate and gzip actually use the same compression algorithm: gzip is a data format that leverages deflate (the algorithm) for compression, while in the context of HTTP, deflate refers to zlib, which is another data format that uses deflate. In technical terms, the deflate (zlib) scheme carries slightly less overhead, but it is less widely and less consistently supported than gzip. Therefore, gzip is the more prevalent choice for HTTP compression.
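This distinction can be observed directly with the JDK's zip classes (a small illustration; the class and method names are mine). GZIPOutputStream produces the gzip format, while DeflaterOutputStream with default settings produces the zlib format that the HTTP specification calls deflate. Both wrap the same deflate-compressed payload, so the outputs differ only in their container headers and trailers, with gzip's being slightly larger.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.GZIPOutputStream;

public class WrapperOverhead {

    // Size of the data once wrapped in the gzip container format
    static int gzipSize(byte[] data) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (OutputStream out = new GZIPOutputStream(buffer)) {
            out.write(data);
        }
        return buffer.size();
    }

    // Size of the data once wrapped in the zlib container format
    // (what the HTTP specification confusingly names "deflate")
    static int zlibSize(byte[] data) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (OutputStream out = new DeflaterOutputStream(buffer)) {
            out.write(data);
        }
        return buffer.size();
    }

    public static void main(String[] args) throws Exception {
        byte[] body = "{\"available\": true}".repeat(100)
                .getBytes(StandardCharsets.UTF_8);
        System.out.println("uncompressed: " + body.length);
        System.out.println("gzip format:  " + gzipSize(body));
        System.out.println("zlib format:  " + zlibSize(body));
    }
}
```

Running this shows two compressed sizes a few bytes apart: the deflate payload is identical, and only the wrapper differs.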
With gzip established as the more prevalent choice, we want to support gzip compression in our RESTful web service. Let's take a look at how this can be achieved.
In general, it falls under the responsibility of the servlet container (for example, Tomcat, Jetty, JBoss, and so on) to deal with compression. You should refer to the documentation of these containers for details on enabling compression.
If the web service uses Spring Boot and runs on either Tomcat or Jetty, enabling gzip compression is as easy as adding the following two properties to application.properties:

server.compression.enabled=true
server.compression.mime-types=application/json
The former property turns compression on, whereas the latter ensures that compression is applied to the JSON content.
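Two related refinements may also be worth knowing about (an optional sketch based on Spring Boot's standard server.compression.* properties): the mime-types list can name several content types, and server.compression.min-response-size sets a threshold below which responses are sent uncompressed, since very small payloads can actually grow when compressed.

```properties
server.compression.mime-types=application/json,application/xml
server.compression.min-response-size=2048
```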