5

Connecting Microservices

The previous part focused on getting started with Docker. Once we had installed Docker on our workstation, we learned more about Docker Compose and its day-to-day usage. We learned to combine Docker images using Compose and to run multi-container applications. After successfully running multi-container applications, we moved on to more advanced concepts, such as Docker volumes and networks. Volumes helped us define how to store and share data, while networks made it possible to isolate certain applications and access them only through a specific network. During this process, we gradually moved away from Docker CLI commands toward Docker Compose commands. By using Compose commands, our focus shifted to the Compose application we provisioned: we could interact with its containers, monitor them, execute administrative commands upon them, and concentrate our operations on the resources provisioned by Compose.

Since we have already created multi-container applications using Compose, we can now proceed with more advanced scenarios involving Compose. Nowadays, applications have become more complex, which leads to the need to split an application into multiple applications, whether for scaling or team-organization purposes. Microservices are the new norm. By splitting a problem into smaller parts, teams can increase their delivery rate. Microservices can also help with performance tuning and with scaling the parts of an application that matter the most.

Although the concept of microservices existed long before the introduction of Docker, Docker played a crucial role in mass microservice adoption. The way services can be isolated, packaged with the tools they need, and deployed anywhere made it possible to reduce the cost and effort of deploying a microservice to a virtual or physical machine.

By using microservices, an application is split into multiple parts. Communication between the services is crucial. There are public-facing microservices, the entry points of an application, and microservices that are only internal.

In this chapter, we will focus on the application introduced in Chapter 2, Running the First Application Using Compose, the Task Manager application, and transform it into a microservice-based application. We will introduce a microservice, the geolocation service, which will be used by the Task Manager. By introducing this service, we will add it to a network that can be accessed only by the Task Manager application. After that, another service will be introduced, which will generate analytics based on data streamed to Redis.

Overall, this chapter will focus on the following topics:

  • Introducing the location microservice
  • Adding the location microservice to Compose
  • Adding the location microservice to a private network
  • Executing requests to a location microservice
  • Streaming task events
  • Adding a task events processing microservice

Technical requirements

The code for the book is hosted on GitHub at https://github.com/PacktPublishing/A-Developer-s-Essential-Guide-to-Docker-Compose. If there is an update to the code, it will be updated on the GitHub repository.

Introducing the location microservice

We will enhance the functionality of the Task Manager application, introduced in Chapter 2, Running the First Application Using Compose, by adding a location where a task should take place. Each task will have a location. As tasks accumulate, their locations will be stored; thus, each time a task is created, previously used locations can be recommended.

We shall create the location service as a new microservice. The service will not share anything with the Task Manager. It will have an API of its own. For simplicity, we shall use the same programming language we used previously, Golang, as well as the same database, Redis.

Let’s proceed with a Redis instance. Since we will use Compose, the following will be our configuration:

services:  
  redis:  
    image: redis

We can run it in detached mode:

$ docker compose -f redis.yaml up -d

We shall create the location service project and add the gin and go-redis dependencies.

Since our tech stack will be the same, we should execute the same initialization steps we executed in Chapter 2, Running the First Application Using Compose.

These are the initialization steps:

go mod init location_service
go get github.com/gin-gonic/gin
go get github.com/go-redis/redis/v8

After this, we shall create the main.go file, which contains the base of our application. We shall use the gin framework as we did before, as well as the Redis database; therefore, we should use the same helper methods, which we can get from GitHub: https://github.com/PacktPublishing/A-Developer-s-Essential-Guide-to-Docker-Compose/blob/main/Chapter5/location-service/main.go.

Since we have the project set, we can proceed with the application logic. An important part of our service is the location model. The location model will hold information such as the unique ID we give to that location, the longitude and latitude, the name of that location, as well as its description. Since we use a REST API, the location model will be marshaled to JSON and returned through the API calls.

The location model to be used in the code base is as follows:

type Location struct {
    Id          string  `json:"id"`
    Name        string  `json:"name"`
    Description string  `json:"description"`
    Longitude   float64 `json:"longitude"`
    Latitude    float64 `json:"latitude"`
}

This service is based on the concept of geolocation; thus, the proper Redis data structures need to be chosen. The location model can be represented as a Redis hash. A hash makes it possible to fetch the object in a key-value manner. By using a prefix plus the ID of the object (location:id) as the key, we can store multiple location objects. Also, thanks to the hash functionality, we can fetch individual fields of an object.

Another important aspect of a location is distance. We would like to be able to retrieve locations in our database based on a position and up to a certain distance. For this purpose, Redis provides us with the Geohash technique. By using GeoAdd, we add locations to a sorted set. A hash is generated from the latitude and longitude and is used as the score by the various geo functions that operate on the sorted set. This makes it feasible, for example, to find locations that are within a certain distance of a point, or even to calculate the distance between two locations stored in the sorted set. Based on these details, to store a location, we shall write it to a hash and add an entry to the sorted set using GeoAdd.

The function that persists the location is as follows:

// Chapter5/location-service/main.go:177
func persistLocation(c context.Context, location Location) error {
	hmset := client.HSet(c,
		fmt.Sprintf(locationIdFormat, location.Id), "Id",
		location.Id, "Name",
		location.Name, "Description",
		location.Description, "Longitude",
		location.Longitude, "Latitude",
		location.Latitude)
	if hmset.Err() != nil {
		return hmset.Err()
	}
	geoLoc := &redis.GeoLocation{Longitude: location.Longitude, Latitude: location.Latitude, Name: location.Id}
	gadd := client.GeoAdd(c, "locations", geoLoc)
	if gadd.Err() != nil {
		return gadd.Err()
	}
	return nil
}

This is the method that retrieves the location from the hash:

// Chapter5/location-service/main.go:153
func fetchLocation(c context.Context, id string) (*Location, error) {
	hgetAll := client.HGetAll(c, fmt.Sprintf(locationIdFormat, id))
	if err := hgetAll.Err(); err != nil {
		return nil, err
	}
	ires, err := hgetAll.Result()
	if err != nil {
		return nil, err
	}
	if l := len(ires); l == 0 {
		return nil, nil
	}
	latitude, _ := strconv.ParseFloat(ires["Latitude"], 64)
	longitude, _ := strconv.ParseFloat(ires["Longitude"], 64)
	location := Location{Id: ires["Id"], Name: ires["Name"], Description: ires["Description"], Longitude: longitude, Latitude: latitude}
	return &location, nil
}

Since we would like to retrieve existing locations based on distance and position, we shall use the Redis spatial functions. A function will be implemented using the GEORADIUS command on the set to which we previously added elements using GEOADD:

// Chapter5/location-service/main.go:124
[...]
func nearByLocations(c context.Context, longitude float64, latitude float64, unit string, distance float64) ([]LocationNearMe, error) {
	var locationsNearMe []LocationNearMe = make([]LocationNearMe, 0)
	query := &redis.GeoRadiusQuery{Unit: unit, WithDist: true, Radius: distance, Sort: "ASC"}
	geoRadius := client.GeoRadius(c, "locations", longitude, latitude, query)
	if err := geoRadius.Err(); err != nil {
		return nil, err
	}
	geoLocations, err := geoRadius.Result()
	if err != nil {
		return nil, err
	}
	for _, geoLocation := range geoLocations {
		if location, err := fetchLocation(c, geoLocation.Name); err != nil {
			return nil, err
		} else {
			locationsNearMe = append(locationsNearMe, LocationNearMe{
				Location: *location,
				Distance: geoLocation.Dist,
			})
		}
	}
	return locationsNearMe, nil
}
[...]

Based on the distance from the coordinates and the distance limit provided, the locations closest to the point will be returned in ascending order.

Since the core methods are implemented, we shall create the REST API using gin:

// Chapter5/location-service/main.go:49
[...]
r.GET("/location/:id", func(c *gin.Context) {
	id := c.Params.ByName("id")
 
	if location, err := fetchLocation(c.Request.Context(), id); err != nil {
        [...]
	} else if location == nil {
		[...]
	} else {
		[...]
	}
 
})
 
r.POST("/location", func(c *gin.Context) {
	var location Location
 
	[...]
	if err := persistLocation(c, location); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"location": location, "created": false, "message": err.Error()})
		return
	}
	[...]
})
 
r.GET("/location/nearby", func(c *gin.Context) {
	[...] 
	if locationsNearMe, err := nearByLocations(c, longitude, latitude, unit, distance); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"message": err.Error()})
		return
	} else {
		c.JSON(http.StatusOK, gin.H{"locations": locationsNearMe})
	}
 
})

We shall run the application and execute some requests:

$ go run main.go

Add a location using curl:

$ curl --location --request POST 'localhost:8080/location/' \
--header 'Content-Type: application/json' \
--data-raw '{
    "id":"0c2e2081-075d-443a-ac20-40bf3b320a6f",
    "name": "Liverpool Street Station",
    "description": "Station for Tube and National Rail",
    "longitude": -0.082966,
    "latitude": 51.517336
}'

{"created":true,"location":{"id":"0c2e2081-075d-443a-ac20-40bf3b320a6f","name":"Liverpool Street Station","description":"Station for Tube and National Rail","longitude":-0.082966,"latitude":51.517336},"message":"Location Created Successfully"}

Find a location using the location ID:

$ curl --location --request GET 'localhost:8080/location/0c2e2081-075d-443a-ac20-40bf3b320a6f'

{"location":{"id":"0c2e2081-075d-443a-ac20-40bf3b320a6f","name":"Liverpool Street Station","description":"Station for Tube and National Rail","longitude":-0.082966,"latitude":51.517336}}

Finally, we will retrieve the locations near a specified point, limited by a certain distance. Because we would like to know the distance to each location, we need a new model. The model will contain the location itself as well as the calculated distance:

type LocationNearMe struct { 
    Location Location `json:"location"` 
    Distance float64 `json:"distance"` 
} 

Retrieve the locations near a point using curl:

$ curl --location --request GET 'localhost:8080/location/nearby?longitude=-0.0197&latitude=51.5055&distance=5&unit=km'

{"locations":[{"location":{"id":"0c2e2081-075d-443a-ac20-40bf3b320a6f","name":"Liverpool Street Station","description":"Station for Tube and National Rail","longitude":-0.082966,"latitude":51.517336},"distance":4.5729}]}

To assist the preceding commands, a Postman collection can be found in the book’s repository (https://github.com/PacktPublishing/A-Developer-s-Essential-Guide-to-Docker-Compose/blob/main/Chapter5/location-service/Location%20Service.postman_collection.json).

We have created our first microservice in this section. We stored the locations and utilized the spatial capabilities of Redis to assist in searching for existing locations. The Task Manager should interact with the location service by using the REST interface provided. In the next section, we shall package the application in a Docker image and create the Compose application.

Adding a location service to Compose

We have implemented the service and have been able to add locations and execute spatial queries. The next step is to package the application using Docker and run it through Compose.

The first step is to create the Dockerfile. The same steps we followed in the previous chapter also apply here:

  1. Adding a Dockerfile
  2. Building the image using Compose

This is the Dockerfile for the location service:

# syntax=docker/dockerfile:1
FROM golang:1.17-alpine
 
RUN apk add curl
 
WORKDIR /app
 
COPY go.mod ./
COPY go.sum ./
 
RUN go mod download
 
COPY *.go ./
 
RUN go build -o /location_service
 
EXPOSE 8080
 
CMD [ "/location_service" ]

The Dockerfile is in place, and we can now proceed to run the service through Compose. To test the application, we need Redis running and the application image built.

Our docker-compose.yaml, at this stage, should look like this:

services: 
  location-service: 
    build: 
      context: ./location-service 
    image: location-service:0.1 
    environment: 
      - REDIS_HOST=redis:6379 
    depends_on: 
      - redis 
    healthcheck: 
      test: ["CMD", "curl", "-f", "http://localhost:8080/ping"] 
      interval: 10s 
      timeout: 5s 
      retries: 5 
      start_period: 5s 
  redis: 
    image: redis

Note that the ports section is missing. This is because the location service is going to be a private one. It shall be accessed only by other containers hosted on Compose.

An image called location-service will be built once we spin up the Compose application. The Redis image should work as it is.

We managed to package the location-service microservice using Docker and run it through Compose. Since we will introduce the Task Manager, we need to revise the networks that the Compose application will provision.

Adding a network for the location microservice

We can now specify the networks on which the application shall run instead of using the default network, as we did previously. The network for the location service shall be named location-network, and we also need a network for Redis. We shall add those networks to Compose:

services:
  location-service:
[...]
    networks:
      - location-network
      - redis-network
[...]
  redis:
    image: redis
    networks:
      - redis-network
networks:
  location-network:
  redis-network:

Redis does not expose any port on the host; the location service is able to access it only because redis-network is included in its networks section. redis-network is a familiar name; it is the same network name we used in Chapter 3, Network and Volumes Fundamentals. Since our microservice is up and running on a dedicated network, we can now proceed with integrating it with the Task Manager application.

Executing requests to the location microservice

Previously, we successfully ran the recently introduced location microservice using Compose. However, the application will be unusable if we do not use it along with the Task Manager. By integrating the Task Manager with the location service, the user should be able to specify a location when creating a task. If the user also retrieves one of the existing tasks, locations near the task’s location shall be presented.

The Task Manager would have to communicate with the location service. For this reason, we shall create a service inside the Task Manager that will issue requests to the location microservice. The same models we used on the location service will also be used for this module.

The location service module in the Task Manager application can be found on GitHub: https://github.com/PacktPublishing/A-Developer-s-Essential-Guide-to-Docker-Compose/blob/main/Chapter5/task-manager/location/location_service.go.

Since the Task Manager will have various feature additions in this chapter, it makes sense to refactor the code. The models should change and include the location models we defined previously:

type Task struct {
    Id          string             `json:"id"`
    Name        string             `json:"name"`
    Description string             `json:"description"`
    Timestamp   int64              `json:"timestamp"`
    Location    *location.Location `json:"location"`
}

Also, we shall separate the persistence methods from the controller method and move them to another file, task_service.go (https://github.com/PacktPublishing/A-Developer-s-Essential-Guide-to-Docker-Compose/blob/main/Chapter5/task-manager/task/task_service.go).

If we take a close look at the logic, the user is free to add a task without specifying a location. If a location is specified and its ID already exists, the request passes through without persisting anything to the location service. If a location is specified and does not exist, the entire location payload is persisted. By specifying only an existing location ID, the user does not have to supply the entire payload. Since we modularized the Task Manager code base, we can proceed to adapt the HTTP controllers (https://github.com/PacktPublishing/A-Developer-s-Essential-Guide-to-Docker-Compose/blob/main/Chapter5/task-manager/main.go).

Now that the Task Manager is adapted, we can update the Compose application and add the Task Manager interacting with location-service.

There are some requirements before doing so:

  • The Task Manager requires Redis and location-service to be up and running.
  • The Task Manager needs to have access to the networks of the preceding services.
  • The Task Manager is an entry point; thus, we shall bind its port to the host.

The Docker image shall be built the same way we did in Chapter 2, Running the First Application Using Compose, but we do need to add the extra source code created previously:

# syntax=docker/dockerfile:1
FROM golang:1.17-alpine
RUN apk add curl
WORKDIR /app
RUN mkdir location 
RUN mkdir task 
RUN mkdir stream
COPY go.mod ./
COPY go.sum ./
RUN go mod download
COPY *.go ./
COPY location/*.go ./location
COPY task/*.go ./task
COPY stream/*.go ./stream
RUN go build -o /task_manager
EXPOSE 8080
CMD [ "/task_manager" ]

We can now see the Compose file, including the Task Manager and the location service:

// Chapter5/task-manager/docker-compose.yaml:19
[...]
  task-manager: 
    build: 
      context: ./task-manager/
    image: task-manager:0.1
    ports: 
      - 8080:8080 
    environment: 
      - REDIS_HOST=redis:6379 
      - LOCATION_HOST=http://location-service:8080
    depends_on: 
      - redis 
      - location-service
    networks: 
      - location-network 
      - redis-network 
    healthcheck: 
      test: ["CMD", "curl", "-f", "http://localhost:8080/ping"] 
      interval: 10s 
      timeout: 5s 
      retries: 5 
      start_period: 5s 
[...]

The Task Manager was created and integrated with the location service. The location service remained an internal microservice without the need to expose it. The Task Manager established communication through a REST endpoint. In the next section, we shall evaluate accessing the application using a message-based form of communication.

Streaming task events

We have been successful previously in running the new microservice using Compose. However, we would like to know how many times a location has been visited or how many tasks have been created over time.

This is a data-driven task. We want to capture and stream information about our application. Redis provides us with streams. By using streams, our application can stream data that can later be processed by another application and create the analytics of our choice.

This will be possible with a simple adaptation to our code. Once a task is added, a message shall be published to a Redis stream.

We will add a service to the Task Manager that will be able to stream events. For now, only when adding a task will a message be sent.

The following code base is the implementation of the TaskStream service, which will be responsible for sending messages on task creation:

// Chapter5/task-manager/task/task-service.go:14
[...] 
type TaskMessage struct {
	taskId      string
	location_id string
	timestamp   int64
}
 
func CreateTaskMessage(taskId string, location *location.Location, timestamp int64) TaskMessage {
	taskMessage := TaskMessage{
		taskId:    taskId,
		timestamp: timestamp,
	}
 
	if location != nil {
		taskMessage.location_id = location.Id
	}
 
	return taskMessage
}
 
func (ts *TaskMessage) toXValues() map[string]interface{} {
	return map[string]interface{}{"task_id": ts.taskId, "timestamp": ts.timestamp, "location_id": ts.location_id}
}
 
func (ts *TaskStream) Publish(c context.Context, message TaskMessage) error {
 
	cmd := ts.Client.XAdd(c, &redis.XAddArgs{
		Stream: "task-stream",
		ID:     "*",
		Values: message.toXValues(),
	})
 
	if _, err := cmd.Result(); err != nil {
		return err
	}
 
	return nil
}

Since we have the message-sending functionality implemented, we will change the PersistTask method in order to send an update once a task is created:

// Chapter5/task-manager/task/task-service.go:28
[...]
func (ts *TaskService) PersistTask(c context.Context, task Task) error {
 
	values := []interface{}{"Id", task.Id, "Name", task.Name, "Description", task.Description, "Timestamp", task.Timestamp}
 
	if task.Location != nil {
		if err := ts.LocationService.AddLocation(task.Location); err != nil {
			return err
		}
		values = append(values, "location", task.Location.Id)
	}
 
	hmset := ts.Client.HSet(c, fmt.Sprintf("task:%s", task.Id), values)
 
	if hmset.Err() != nil {
		return hmset.Err()
	}
 
	z := redis.Z{Score: float64(task.Timestamp), Member: task.Id}
	zadd := ts.Client.ZAdd(c, "tasks", &z)
 
	if zadd.Err() != nil {
		return zadd.Err()
	}
 
	mes := stream.CreateTaskMessage(task.Id, task.Location, task.Timestamp)
 
	return ts.TaskStream.Publish(c, mes)
}
[...]

So far, we have enhanced our application to send events on task insertions. In the next section, we shall proceed with consuming those messages.

Adding a task events processing microservice

In the previous section, we produced events regarding our Task Manager application. This enables us to add an application that shall be message-driven. For now, the events service will consume the data from a Redis stream and print data on the console.

Our code base will be lean and require only the Redis client.

Let’s add the code that will consume the events:

[...]
client.XGroupCreateMkStream(ctx, stream, consumerGroup, "0").Result()
 
for {
	entries, err := client.XReadGroup(ctx,
		&redis.XReadGroupArgs{
			Group:    consumerGroup,
			Consumer: consumer,
			Streams:  []string{stream, ">"},
			Count:    1,
			Block:    0,
			NoAck:    false,
		},
	).Result()
	if err != nil {
		log.Fatal(err)
	}
 
	for i := 0; i < len(entries[0].Messages); i++ {
		messageID := entries[0].Messages[i].ID
		values := entries[0].Messages[i].Values
 
		taskId := values["task_id"]
		timestamp := values["timestamp"]
		locationId := values["location_id"]
 
		log.Printf("Received %v %v %v", taskId, timestamp, locationId)
 
		client.XAck(ctx, stream, consumerGroup, messageID)
	}
}
[...]

Let’s create the Dockerfile for it:

# syntax=docker/dockerfile:1
 
FROM golang:1.17-alpine
 
RUN apk add curl
 
WORKDIR /app
 
COPY go.mod ./
COPY go.sum ./
 
RUN go mod download
 
COPY *.go ./
 
RUN go build -o /events_service
 
CMD [ "/events_service" ]

Then, we should add the events service to Compose:

services: 
  location-service: 
    [...]
  task-manager: 
    [...]
  event-service: 
    build: 
      context: ./events-service/
    image: event-service:0.1 
    environment: 
      - REDIS_HOST=redis:6379 
    depends_on: 
      - redis 
    networks: 
      - redis-network 
networks:
  location-network:
  redis-network:

The new service does not need as many settings in Compose as the REST-based services. Being stream-based, it only needs a connection to Redis to consume the stream.

Since all the service’s configurations are in place, we can run the application and observe task events getting streamed to the events microservice:

$ docker compose up

...

chapter5-event-service-1     | 2022/05/08 09:03:38 Received 8b171ce0-6f7b-4c22-aa6f-8b110c19f83a 1645275972000 0c2e2081-075d-443a-ac20-40bf3b320a6f

...

We have been successful in listening to events when a task is created, and incorporated that code into our existing Compose application.

Summary

In this chapter, we created two microservices that integrate with the Task Manager. The microservices differed in nature: one used REST-based communication, while the other was message-driven. Regardless of the differences, by using Compose, it was possible to orchestrate and isolate those microservices. As expected, monitoring plays a crucial part in the services we created. By monitoring properly, we can ensure availability and smooth usage for the end user.

The next chapter will be focused on monitoring and how to achieve it by using Prometheus.
