Best practices for REST-based microservices

In this section, we'll discuss a few best practices that make your microservices architecture (MSA) developer-friendly, so that errors can be managed and tracked easily:

  • Meaningful names: It's always important to provide a meaningful name in the request header, so that if a problem such as performance degradation, memory wastage, or a spike in user load occurs, developers and performance engineers can easily trace which microservice the request originated from and how it cascaded. It's therefore a best practice to supply the logical name/{service id} in the User-Agent property of the request header, for example, User-Agent:EmployeeSearchService
  • API management: In a REST-based microservice architecture, one microservice accesses another via an API, which acts as a facade to the underlying service. APIs therefore have to be built carefully, because changing them entails additional problems; that is, they have to be designed with future demands in mind. Changing an API's method signature is risky, because many other microservices depend on that API to access the service. Tasks such as API usage tracking, versioning, and management thus acquire special significance in our increasingly API-centric world.
  • Correlation ID: Microservices, for the sake of guaranteeing high availability, are typically spread across multiple servers; that is, there can be multiple instances of the same microservice. With containers emerging as the most optimized runtime for microservices, running multiple instances of a microservice has become the new normal. To fulfill one client request, control has to pass through multiple microservices and instances. If one service in the pipeline misbehaves, we need to understand its real state to determine our course of action. Service tracking and distributed tracing therefore gain importance for the microservices architecture to be successful in the connected and cloud era. The widely recommended mechanism is to generate a random UUID for every client request and pass that UUID along with every internal service request. Then, by analyzing the log files, service operators can easily pinpoint the problematic service.
  • ELK implementation: Microservices are small and simple. In any IT environment, there can be hundreds of microservices, and each microservice has multiple redundant instances in order to ensure the much-wanted fault tolerance. Each instance generates a log file, and administrators find that visiting each log file to locate something useful is not an easy affair. So, capturing and storing log files, implementing a powerful search engine on the log file store, and applying appropriate machine learning (ML) algorithms to that log data in order to extract useful patterns, noteworthy information, or beneficial associations are vital in order to make sense of the log data. The ELK stack, which is open source software, fulfills these differing requirements in a tightly integrated manner. E stands for Elasticsearch, L for Logstash, and K for Kibana. Elasticsearch stores and indexes the logs and provides search capabilities (including fuzzy search), Logstash is used to collect logs from different sources and transform them, and Kibana is a graphical user interface (GUI) that helps data scientists, testers, developers, and even businesspeople to insightfully search the logs as per their evolving requirements. Considering the significance of log analytics, there are open source as well as commercial-grade solutions to extract operational, performance, scalability, and security insights from microservice interaction log data.
  • Resiliency implementation: There are frameworks and solutions that guarantee reliability (resiliency + elasticity) when services interact with one another.
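The first and third practices above (a logical service name in the User-Agent property and a propagated correlation UUID) can be sketched together. This is a minimal illustration; the target URL and the X-Correlation-ID header name are common conventions assumed here, not part of any standard:

```python
import urllib.request
import uuid

SERVICE_NAME = "EmployeeSearchService"  # logical name of this microservice

def new_correlation_id() -> str:
    """Generate a random UUID once per client request, at the system's edge."""
    return str(uuid.uuid4())

def build_request(url: str, correlation_id: str) -> urllib.request.Request:
    """Attach the logical service name and the correlation ID to an outbound call."""
    return urllib.request.Request(url, headers={
        "User-Agent": SERVICE_NAME,          # identifies the originating service
        "X-Correlation-ID": correlation_id,  # ties all internal hops together
    })

cid = new_correlation_id()
req = build_request("http://inventory.internal/api/v1/items", cid)
```

Every downstream call and every log line then carries the same UUID, so operators can search one identifier across all service logs.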
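The API management concern can also be illustrated with a small sketch: rather than changing an existing method signature, a new version is published alongside the old one. The route paths and function names here are invented for illustration:

```python
# URI-based API versioning: the v1 contract stays frozen so existing
# consumers keep working, while new capabilities ship under v2.

def search_employees_v1(name):
    """Original contract: match on name only."""
    return {"name": name, "version": 1}

def search_employees_v2(name, department=None):
    """Extended contract, published as a new version instead of a breaking change."""
    return {"name": name, "department": department, "version": 2}

ROUTES = {
    "/api/v1/employees/search": search_employees_v1,
    "/api/v2/employees/search": search_employees_v2,
}
```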
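To make logs easy for Logstash to collect and for Elasticsearch to index, each instance can emit structured records, one JSON object per line. The field names and service name below are illustrative assumptions:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line, ready for log shippers."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": "EmployeeSearchService",  # illustrative logical name
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("employee-search")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("stock level updated")
```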
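As a small taste of what such resiliency frameworks provide, here is a sketch of retrying a flaky downstream call with exponential backoff; real solutions layer circuit breakers, bulkheads, and timeouts on top of this idea. The simulated service is purely illustrative:

```python
import time

def retry(call, attempts=3, base_delay=0.01):
    """Invoke call(); on ConnectionError, back off exponentially and retry."""
    for i in range(attempts):
        try:
            return call()
        except ConnectionError:
            if i == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** i))

# Simulated downstream service that fails twice before recovering.
state = {"calls": 0}
def flaky_service():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"
```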

REST-based microservices are popular not only due to their extreme simplicity, but also due to the fact that services communicate directly (synchronously) with each other over HTTP. This direct communication means that there's no need for any kind of intermediary, such as a hub, bus, broker, or gateway. For example, consider a B2C e-commerce system that instantly notifies customers when a particular product is back in stock. This notification could be implemented via RESTful microservices:

It should be noted that the communication is point-to-point. Still, hardcoding service addresses isn't good practice. Therefore, the prominent workaround is to leverage a service discovery mechanism, such as Eureka or Consul. These are highly available centralized servers with which services register their API addresses, along with their availability status for instantaneous serving. Client services can request a specific API address from this centralized server in order to identify and leverage the appropriate services. Still, there are several shortcomings, which are listed as follows:

  • Blocking: Due to the synchronous nature of the REST approach, the update-stock operation is blocked until the notification service completes its task of notifying all relevant customers. If there are thousands of customers wishing to be notified about the additional stock, the system's performance is bound to degrade sharply. This performance issue arises from the tight coupling. One way to overcome it is to embrace the pipeline pattern. The architecture diagram then gets modified as follows:

Here, the communication is still REST-based, but the real shift is that point-to-point communication is eliminated. The Pipeline entity is entirely responsible for orchestrating control and data flows. The services are totally decoupled, and this decoupling makes microservices autonomous. However, with this approach, services must rely on the pipeline orchestration in order to contribute; hence, services are self-defined, yet not self-sufficient.
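The orchestration idea can be sketched as a pipeline function that owns the control flow, so that the stages never address each other directly. The stage names below are invented for the stock-notification example:

```python
def update_stock(order):
    """First stage: record the replenished stock."""
    order["stock_updated"] = True
    return order

def notify_customers(order):
    """Second stage: fan out notifications; knows nothing about the first stage."""
    order["customers_notified"] = True
    return order

def run_pipeline(stages, payload):
    """The pipeline entity orchestrates control and data flow between stages."""
    for stage in stages:
        payload = stage(payload)
    return payload

result = run_pipeline([update_stock, notify_customers], {"product": "P-100"})
```

Adding, removing, or reordering stages changes only the list passed to the pipeline, never the stages themselves.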

  • Asynchronous messaging: Consider a typical messaging-based system. Here, the messages exchanged between services can be defined as commands or events. Each service subscribes to the events that it's interested in consuming, and these events are delivered reliably through a mechanism such as a message queue/broker when they are placed on the queue by other services. With this approach, the stock notification subsystem can be remodeled as follows:

This refurbished architecture brings forth a number of crucial advantages, such as enhanced flexibility, service isolation, and autonomy. The shift eases the addition, removal, or modification of services without affecting the operation or code of other services, and any kind of service failure can be handled gracefully. These aspects need to be carefully considered when designing and developing microservices-based enterprise applications.
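The event-driven decoupling described above can be sketched with an in-process queue standing in for the message broker; the event shape and service functions are illustrative assumptions:

```python
import queue
import threading

events = queue.Queue()  # stands in for a message broker in this sketch
notified = []

def stock_service(product):
    """Publish a stock event and return immediately, without blocking on consumers."""
    events.put({"event": "stock_replenished", "product": product})

def notification_service():
    """Independently consume events; a None sentinel stops the consumer."""
    while True:
        event = events.get()
        if event is None:
            break
        notified.append(event["product"])

consumer = threading.Thread(target=notification_service)
consumer.start()
stock_service("P-100")  # the producer is never blocked by notification work
events.put(None)
consumer.join()
```

Because the producer only enqueues an event, notifying thousands of customers no longer stalls the update-stock operation.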

As technologies become increasingly complex, best practices and procedures sourced through experimentation come in handy for architects and developers creating strategically sound software systems. As microservices emerge as the most optimal building block for production-grade and extensible business and IT systems, our focus turns toward ways of leveraging the matured and stabilized REST paradigm to create and sustain business-critical, microservices-centric software applications.
