Request tracing using Sleuth

We have seen how to move from distributed, fragmented logging to a centralized logging architecture. This approach solves the problem of logs being scattered across separate local machines by aggregating them all in central storage. But how do we trace the logs belonging to a single request through an end-to-end transaction? A transaction spreads across multiple microservices, so to track it from end to end we need a correlation ID. We need a solution that focuses on tracking how a request travels through the microservices, especially when we may have no insight into the implementation of the microservices being called.

Spring Cloud provides a library, Spring Cloud Sleuth, to help with this exact problem. Spring Cloud Sleuth adds unique IDs to each log message, and these IDs remain consistent across microservice calls for a single request. Using them, you can find all of the log messages generated for a transaction. Twitter's Zipkin, Cloudera's HTrace, and Google's Dapper are examples of distributed tracing systems.

Spring Cloud Sleuth is built on two key concepts, the span and the trace, and it generates an ID for each: the span ID and the trace ID. A span is a basic unit of work, such as an HTTP call to a resource, and the span ID identifies that unit of work. A trace is a set of spans that together form an end-to-end transaction, so the trace ID is shared by all of the spans generated for that transaction. In other words, for a given request the trace ID stays the same across all the microservice calls, and you can use it to track a call from end to end:

As you can see in the preceding diagram, multiple microservices are running on different nodes: Microservice A calls B and C, B calls D, D calls E, and so on. The trace ID is passed across all of these microservices, and it is this trace ID that is used to track the end-to-end log transaction.
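The contract the diagram describes can be illustrated with a small, self-contained sketch. This is hypothetical code, not Sleuth's actual implementation: it only shows that one trace ID is created per request, while every downstream call gets a fresh span ID.

```java
import java.util.UUID;

// Hypothetical illustration only -- NOT Sleuth's implementation.
// One trace ID per request, a fresh span ID per unit of work.
public class TracePropagationSketch {

    public record TraceContext(String traceId, String spanId) {}

    // A request entering the system starts a new trace; for the root span,
    // the span ID equals the trace ID.
    public static TraceContext startTrace() {
        String id = newId();
        return new TraceContext(id, id);
    }

    // A downstream call keeps the parent's trace ID but opens a fresh span.
    public static TraceContext childSpan(TraceContext parent) {
        return new TraceContext(parent.traceId(), newId());
    }

    private static String newId() {
        return UUID.randomUUID().toString().replace("-", "").substring(0, 16);
    }

    public static void main(String[] args) {
        TraceContext customer = startTrace();       // Microservice A
        TraceContext account = childSpan(customer); // A calls B
        System.out.println(customer.traceId().equals(account.traceId())); // true
        System.out.println(customer.spanId().equals(account.spanId()));   // false
    }
}
```

Sleuth does this for you automatically, propagating the context between services as HTTP headers.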

Let's update our previous example of the Account and Customer microservices by adding a new Maven dependency for the Spring Cloud Sleuth library. These are the steps to use Spring Cloud Sleuth in your distributed application:

  1. Add the Maven dependency for Spring Cloud Sleuth in your distributed application:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
  2. The Logstash dependency remains the same as the one we added in the previous examples for implementing centralized logging.
  3. You can set the application name by setting the spring.application.name property in either application.yml or bootstrap.yml. Alternatively, you can add the application name to the Logback configuration file of each microservice:
<property name="spring.application.name" value="account-service"/>
<property name="spring.application.name" value="customer-service"/>

The preceding application name will show up as part of the tracing information produced by Spring Cloud Sleuth.
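If you want the trace information in a custom Logback layout, the following is a minimal sketch of a logback-spring.xml using such a property. The MDC key names shown (X-B3-TraceId and friends) are an assumption that depends on the Sleuth version in use; Sleuth 2.x uses traceId and spanId instead, and Sleuth already includes the trace data in Spring Boot's default console pattern, so a custom pattern is optional.

```xml
<!-- Hypothetical logback-spring.xml fragment; MDC key names vary by Sleuth version -->
<configuration>
    <property name="spring.application.name" value="customer-service"/>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [${spring.application.name},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}] %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>
```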

  4. Add log messages if you don't have any, and make sure one service calls another so that log tracing can be verified in this distributed application. I have added a request method to demonstrate the propagation of the trace ID across multiple microservices: a method in the Customer service calls the Account service, using RestTemplate, to fetch the account information of a customer, and log messages have been added to the corresponding methods of both services.

In the CustomerController class:

@GetMapping(value = "/customer/{customerId}")
public Customer findByCustomerId(@PathVariable Integer customerId) {
    Customer customer = customerRepository.findByCustomerId(customerId);
    logger.info("Customer's account information by calling account-service ");
    List<Account> list = restTemplate.getForObject(
            "http://localhost:6060/account/customer/" + customerId, List.class);
    customer.setAccount(list);
    logger.info("Find Customer information by id with fetched account info: " + customerId);
    return customer;
}

In the AccountController class:

@GetMapping(value = "/account/customer/{customer}")
public List<Account> findByCustomer(@PathVariable Integer customer) {
    logger.info("Find all Accounts information by customer: " + customer);
    return accountRepository.findAllByCustomerId(customer);
}
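Note that Sleuth instruments RestTemplate through a client interceptor, but only when the RestTemplate is a Spring-managed bean; one created with new RestTemplate() inside the controller would not carry the trace headers to the Account service. A minimal configuration sketch of the wiring this example assumes (the class name RestTemplateConfig is illustrative):

```java
// Sketch of the assumed wiring: the RestTemplate injected into
// CustomerController must be a bean so that Sleuth can add the trace
// headers to outgoing requests.
@Configuration
public class RestTemplateConfig {

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
```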
  5. Run both the Customer and Account services, and hit the following endpoint in the browser: http://localhost:6161/customer/1001.
  6. Let's look at the console logs of both services to see the trace and span IDs printed.

The Customer microservice console logs:

2018-05-09 00:51:00.639 INFO [customer-service,9a562435c0fb488a,9a562435c0fb488a,false] Customer's account information by calling account-service
2018-05-09 00:51:00.766 INFO [customer-service,9a562435c0fb488a,9a562435c0fb488a,false] Find Customer information by id with fetched account info: 1001

As you can see in the preceding log statements, Sleuth adds [customer-service,9a562435c0fb488a,9a562435c0fb488a,false]. The first part (customer-service) is the application name, the second part is the trace ID, the third part is the span ID, and the last part indicates whether the span should be exported to Zipkin.
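If you ever need to extract these values programmatically, for example in a log-processing script, a small helper can parse the bracketed prefix. This class is hypothetical, not part of Sleuth, and assumes the [appName,traceId,spanId,exported] layout shown above:

```java
// Hypothetical helper (not part of Sleuth): parses the bracketed prefix
// [appName,traceId,spanId,exported] that Sleuth prepends to log lines.
public class SleuthLogPrefix {

    public record Prefix(String app, String traceId, String spanId, boolean exported) {}

    public static Prefix parse(String logLine) {
        int open = logLine.indexOf('[');
        int close = logLine.indexOf(']', open);
        // Split the comma-separated fields between the brackets.
        String[] parts = logLine.substring(open + 1, close).split(",");
        return new Prefix(parts[0].trim(), parts[1].trim(), parts[2].trim(),
                Boolean.parseBoolean(parts[3].trim()));
    }

    public static void main(String[] args) {
        Prefix p = parse("2018-05-09 00:51:00.639 INFO "
                + "[customer-service,9a562435c0fb488a,9a562435c0fb488a,false] msg");
        System.out.println(p.traceId()); // 9a562435c0fb488a
    }
}
```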

The Account microservice console logs:

2018-05-09 00:51:00.741 INFO [account-service,9a562435c0fb488a,72a6bb245fccafd9,false] Find all Accounts information by customer: 1001
2018-05-09 00:53:38.109 INFO [account-service,,] Resolving eureka endpoints via configuration

As you can see in the preceding logs of both services, the trace IDs are the same but the span IDs are different (in the Customer service, the root span's ID equals the trace ID).

You can also check the same thing on the Kibana dashboard:

We have discussed how the Sleuth library adds tracing information to log messages. Let's see how Zipkin helps to analyze the latency of the service calls.
