In the previous chapter you deployed Consul onto Kubernetes or VMs. In this chapter, you’ll deploy non-mesh services and then learn how to add them to the Consul service mesh. On Kubernetes, this can be done with a simple annotation. On VMs, you have a bit more work to do: you need to register your services with Consul and run their sidecar proxies.
First, let’s examine the services you’re going to deploy.
In order to try out the service mesh, you first need some services. You’re going to use the two example services, frontend and backend, from Chapter 1. As detailed in Chapter 1 and shown in Figure 3-1, frontend hosts a website that is accessed by users. When a user makes a call to frontend, frontend in turn makes a subsequent API call to backend. backend will respond with Hello from backend, and frontend will respond with its own Hello from frontend and include the response from backend so you can see whether that call succeeded.
Figure 3-1. 1) The user makes a request to the frontend service. 2) The frontend service makes a request to backend. 3) backend responds Hello from backend. 4) frontend responds Hello from frontend back to the user and includes backend’s response as well.
At first, you’ll deploy frontend and backend without sidecar proxies, and frontend will call backend without going through the service mesh. Then you’ll add both services to the service mesh by deploying sidecar proxies. Once the services are in the mesh, traffic between them will flow through their sidecar proxies.
The traffic between the users and frontend won’t flow through sidecar proxies since users won’t be running a sidecar proxy. For now, you’ll configure frontend so that users can access it directly, bypassing its sidecar proxy. Chapter 9 covers how to properly expose frontend with an ingress gateway.
Because we want to focus on the communication between services, we’re not going to build our services from scratch. Instead, you’ll use a pre-built example application called fake-service to save time. This service happens to be written in Go, but the service mesh doesn’t care which language your applications are written in as long as they use a network protocol like TCP, HTTP, or gRPC. On Kubernetes, you’ll run fake-service with its Docker image. On VMs, you’ll use the fake-service binary.
Now that you’ve been introduced to the architecture of the services, it’s time to deploy them. I cover how to do this on Kubernetes first, and then move on to VMs (“Deploying Service Mesh Services on VMs”).
Deploying the frontend and backend services into Kubernetes requires you to create Deployment and Service resources. A Deployment is a Kubernetes resource type used to deploy multiple replicas of an application and manage its lifecycle. A Deployment is made up of a set number of Pods.1 For this example, you’re going to deploy one backend Pod and one frontend Pod.
In addition to the Deployment, you need a Service resource. Service resources in Kubernetes are used to configure routing. Specifically, by creating a Service, you create a DNS entry that routes to your Deployment. For example, when you create a Service for your backend application, the frontend application can now call backend using the URL http://backend and its request will be routed to any backend Pod.
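If you want to see that DNS entry in action once both services are deployed later in this section, one quick, optional check is to run a temporary Pod and curl the backend Service by name. This is just a sketch; it assumes your cluster can pull the curlimages/curl image:
$ kubectl run dns-test --rm -it --restart=Never \
    --image=curlimages/curl -- curl -s http://backend
If routing is working, you’ll get back backend’s JSON response.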
The Kubernetes resource type Service, which is used to configure routing in Kubernetes, can sometimes be confused with the generic term service, which is used to refer to an application. This book always uppercases the Kubernetes resource in order to avoid confusion between the two.
To create resources in Kubernetes, you need to use YAML files. First, you’ll create the YAML files for the regular (non-mesh) frontend and backend services and deploy them into Kubernetes.
To deploy frontend and backend into Kubernetes, you need to create a Deployment and Service resource for each service. For simplicity, use a separate file for each resource. In the same directory where you created your values.yaml file for the Helm installation, create a new directory called manifests and cd into it:
$ mkdir manifests
$ cd manifests
A set of Kubernetes YAML files is often called a manifest because it’s a declaration of resources.
Inside the manifests/ directory, create four YAML files with the following contents:
# frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: nicholasjackson/fake-service:v0.21.0
          env:
            - name: "LISTEN_ADDR"
              value: "0.0.0.0:80"
            - name: NAME
              value: "frontend"
            - name: MESSAGE
              value: "Hello from frontend"
            - name: UPSTREAM_URIS
              value: "http://backend"
          ports:
            - containerPort: 80
# frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
# backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: nicholasjackson/fake-service:v0.21.0
          env:
            - name: LISTEN_ADDR
              value: "0.0.0.0:80"
            - name: NAME
              value: "backend"
            - name: MESSAGE
              value: "Hello from backend"
          ports:
            - containerPort: 80
# backend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
  labels:
    app: backend
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Use kubectl apply to apply these resources to Kubernetes:
$ kubectl apply -f ./
deployment.apps/backend created
service/backend created
deployment.apps/frontend created
service/frontend created
Use the kubectl get command with the --selector flag to view the frontend service’s Deployment and Service:
$ kubectl get deployment,service --selector app=frontend
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/frontend   1/1     1            1           1s

NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/frontend    ClusterIP   10.96.148.61   <none>        80/TCP    1s
deployment.apps/frontend should show 1/1 READY and 1 Available, meaning that there is one Pod running and the one container in that Pod is ready.
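You can also look at the Pod itself by querying with the same label, if you want to confirm it is running (an optional check using the label selector from above):
$ kubectl get pods --selector app=frontend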
Use the same command with a different selector to view the backend service’s Deployment and Service:
$ kubectl get deployment,service --selector app=backend
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/backend   1/1     1            1           1s

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/backend    ClusterIP   10.96.38.122   <none>        80/TCP    1s
Now that your services are deployed, let’s see if frontend can call backend as expected. Use the kubectl port-forward command to forward your workstation’s localhost:8080 to the frontend application:
$ kubectl port-forward deployment/frontend 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Now in another terminal, use curl to mimic a user’s call to frontend:
$ curl http://localhost:8080
{
"name": "frontend",
"uri": "/",
"type": "HTTP",
"ip_addresses": [
"10.244.0.12"
],
"start_time": "2021-04-04T20:29:54.940665",
"end_time": "2021-04-04T20:29:54.943740",
"duration": "3.07511ms",
"body": "Hello from frontend",
"upstream_calls": {
"http://backend": {
"name": "backend",
"uri": "http://backend",
"type": "HTTP",
"ip_addresses": [
"10.244.0.10"
],
"start_time": "2021-04-04T20:29:54.943236",
"end_time": "2021-04-04T20:29:54.943304",
"duration": "67.339µs",
"headers": {
"Content-Length": "256",
"Content-Type": "text/plain; charset=utf-8",
"Date": "Sun, 04 Apr 2021 20:29:54 GMT"
},
"body": "Hello from backend",
"code": 200
}
},
"code": 200
}
The response from the frontend service.
The response from the backend service is included.
The body of the response from the backend service was “Hello from backend”.
The HTTP response code of the call to backend.
frontend is configured to call backend through its UPSTREAM_URIS environment variable, which is set to http://backend.
You’ve successfully deployed the frontend and backend applications into Kubernetes and verified that they are communicating as expected. However, these services aren’t yet part of the service mesh: they’re communicating insecurely over plaintext, there is no way to prevent other applications from calling them, and you can’t control where they’re routing their traffic. The next step is to add them to the service mesh so we can rectify these issues.
On Kubernetes it is easy to add existing services to the service mesh. All you need to do is add an annotation to the Deployment that tells Consul to inject its service mesh configuration: consul.hashicorp.com/connect-inject: "true".
Let’s add this annotation to our two Deployments. First, edit frontend-deployment.yaml and add the annotation:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: frontend
          image: nicholasjackson/fake-service:v0.21.0
          env:
            - name: "LISTEN_ADDR"
              value: "0.0.0.0:80"
            - name: NAME
              value: "frontend"
            - name: MESSAGE
              value: "Hello from frontend"
            - name: UPSTREAM_URIS
              value: "http://backend"
          ports:
            - containerPort: 80
Add annotations: under the spec.template.metadata key.
Add the consul.hashicorp.com/connect-inject annotation and set it to the string "true".
It’s important that you add the annotation under the spec.template.metadata key and not the top-level metadata key. The top-level metadata key is for the Deployment itself, while spec.template.metadata is for the Pods that make up the Deployment. Consul is actually watching for Pods (see the upcoming sidebar), not Deployments, so if the annotation isn’t on the Pod, Consul will ignore it.
It’s also important that you set the value to the string "true", with quotes, because otherwise YAML will treat it as a boolean, which isn’t allowed as an annotation value and you’ll get an error.
Now that you’ve modified your YAML, you’re ready to redeploy the frontend application. To do so, use kubectl apply:
$ kubectl apply -f ./
deployment.apps/backend unchanged
service/backend unchanged
deployment.apps/frontend configured
service/frontend unchanged
Kubernetes will handle starting a new Pod with the changes and spinning down the old Pod. You can use the kubectl rollout command to wait until the redeployment is complete:
$ kubectl rollout status --watch deploy/frontend
Waiting for deployment "frontend" rollout to finish...
deployment "frontend" successfully rolled out
Now that the Pod has been redeployed, when you open up the Consul UI, you should see the frontend service listed as part of the service mesh, as shown in Figure 3-2.
Figure 3-2. The frontend service is now listed in the Consul UI.
Now add the backend service to the service mesh. You’ll add the same annotation to backend-deployment.yaml:
# backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: backend
          image: nicholasjackson/fake-service:v0.21.0
          env:
            - name: LISTEN_ADDR
              value: "0.0.0.0:80"
            - name: NAME
              value: "backend"
            - name: MESSAGE
              value: "Hello from backend"
          ports:
            - containerPort: 80
Redeploy it with kubectl apply:
$ kubectl apply -f ./
deployment.apps/backend configured
service/backend unchanged
deployment.apps/frontend unchanged
service/frontend unchanged
Wait for the deployment to complete:
$ kubectl rollout status --watch deploy/backend
Waiting for deployment "backend" rollout to finish: 1 old replicas are pending termination...
deployment "backend" successfully rolled out
Now when you view the UI (see “Consul UI” for how to access the UI), you should see both services listed, as in Figure 3-3.
Figure 3-3. The frontend and backend services are now also listed in the Consul UI.
In order to test out the connection between frontend and backend, use the port-forward and curl combination. You will first need to stop the previous port-forward command with Ctrl-C if it’s still running because it’s pointing at a Pod that’s no longer running. Once you stop the old port-forward, start a new one:
$ kubectl port-forward deployment/frontend 8080:80
And then make your curl call again in another terminal:
$ curl http://localhost:8080
{
"name": "frontend",
"uri": "/",
"type": "HTTP",
"ip_addresses": [
"10.244.0.15"
],
"start_time": "2021-07-11T19:27:47.702816",
"end_time": "2021-07-11T19:27:47.708736",
"duration": "5.919404ms",
"body": "Hello from frontend",
"upstream_calls": {
"http://backend": {
"name": "backend",
"uri": "http://backend",
"type": "HTTP",
"ip_addresses": [
"10.244.0.16"
],
"start_time": "2021-07-11T19:27:47.707571",
"end_time": "2021-07-11T19:27:47.707688",
"duration": "117.368µs",
"headers": {
"Content-Length": "257",
"Content-Type": "text/plain; charset=utf-8",
"Date": "Sun, 11 Jul 2021 19:27:47 GMT"
},
"body": "Hello from backend",
"code": 200
}
},
"code": 200
}
If everything is working as expected, the response will be exactly the same as before we added the services to the mesh. In order to verify that requests are going through the service mesh, you can use the Consul UI’s topology view, which shows metrics emitted from the sidecar proxies.
To view the topology page for the backend service that is receiving traffic from frontend, click the backend service from the All Services view or navigate to http://localhost:8500/ui/dc1/services/backend/topology. You should see something similar to Figure 3-4.
If you don’t see any metrics at first, they may take a minute or two to show up. To generate more metrics, re-run the curl command 5-10 times.
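If you’d rather not re-run the command by hand, a small shell loop works just as well (a minimal sketch; adjust the count to taste):
$ for i in $(seq 1 10); do curl -s http://localhost:8080 > /dev/null; sleep 1; done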
If you’re seeing metrics in the topology view that means you’ve successfully added your services to the mesh! Chapter 6 covers metrics in more detail.
In the next section, you’ll learn how to add services running on VMs into the mesh. If you’re only interested in Kubernetes, feel free to skip ahead to the next chapter, where you’ll learn about service mesh security, specifically encryption and authorization. In fact, even with these simple changes you’ve made here, your applications are already more secure because all the traffic flowing between frontend and backend is being automatically encrypted.
If you’re interested in also deploying onto VMs, read on as I cover how to add VM services into the service mesh.
In this section, I cover how to run service mesh services on VMs. First, you’ll deploy your two example services onto the VM and start them using systemd. Next, you’ll register the services into Consul so that Consul knows what services are running and on which ports. Finally, you’ll start each service’s sidecar proxy and configure the services to route traffic through the proxies. At this point, your services will be fully in the mesh, and you’ll be ready to secure, observe, and control their traffic in subsequent chapters.
You’re going to use two pre-built example services so you can focus on the VM deployment rather than a particular service’s code. First, you need to get the service binaries onto your VM and then configure them to start with systemd.
First, ssh into the VM with vagrant ssh:
$ vagrant ssh
Download the service binaries onto the VM using wget:
$ wget https://github.com/nicholasjackson/fake-service/releases/download/v0.21.0/fake_service_linux_amd64.zip
…
‘fake_service_linux_amd64.zip’ saved
Next, install unzip:
$ sudo apt install unzip -y
Then unzip the .zip file:
$ unzip fake_service_linux_amd64.zip
Archive:  fake_service_linux_amd64.zip
  inflating: fake-service
chmod it so it’s executable and then mv it to /usr/local/bin:
$ chmod +x fake-service
$ sudo mv ./fake-service /usr/local/bin/fake-service
Check that you can run it:
$ fake-service -h
Usage of fake-service:
  -help
        --help to show env help
Finally, clean up the .zip file:
$ rm ./fake_service_linux_amd64.zip
Now that your service binary is on your VM, you’re ready to start up your services using systemd.
In this example you’re provisioning your services onto the VM manually. In a real-life use case, you would use provisioning tools like Puppet, Ansible, or Packer to install and configure service binaries.
Just as you are running Consul as a long-running background service under systemd, you want to do the same with your two applications, frontend and backend. To do so, create two systemd unit files, frontend.service and backend.service, in /etc/systemd/system/:
$ sudo touch /etc/systemd/system/frontend.service
$ sudo touch /etc/systemd/system/backend.service
Edit the frontend.service file and add the following:
[Unit] Description="Frontend service" # The service requires the VM's network # to be configured, e.g. an IP address has been assigned. Requires=network-online.target After=network-online.target [Service] # ExecStart is the command to run. ExecStart=/usr/local/bin/fake-service # Environment sets environment variables. # Set the frontend service to listen # on port 8080. Environment=LISTEN_ADDR=0.0.0.0:8080 Environment=NAME=frontend # Set UPSTREAM_URIs to http://localhost:9090 because # that's the port you'll run your backend service on. Environment=UPSTREAM_URIS=http://localhost:9090 # The Install section configures this service to start # automatically if the VM reboots. [Install] WantedBy=multi-user.target
Next edit the backend.service file and add the following:
[Unit] Description="Backend service" Requires=network-online.target After=network-online.target [Service] ExecStart=/usr/local/bin/fake-service # Set the backend service to listen # on port 9090. Environment=LISTEN_ADDR=0.0.0.0:9090 Environment=NAME=backend Environment=Message="Hello from backend" [Install] WantedBy=multi-user.target
A unit in systemd means any resource managed by systemd, for example the long-running Consul service. A unit file is a systemd configuration file that tells systemd how to run and manage that unit.
With the unit files in place, you’re ready to enable and start your services. A service in systemd must first be enabled via systemctl:
$ sudo systemctl enable frontend.service
Created symlink ...
$ sudo systemctl enable backend.service
Created symlink ...
Now start the services:
$ sudo systemctl start frontend.service
$ sudo systemctl start backend.service
You can check their statuses with systemctl status:
$ sudo systemctl status frontend.service
● frontend.service - "Frontend service"
   Loaded: loaded...
   Active: active (running)...
$ sudo systemctl status backend.service
● backend.service - "Backend service"
   Loaded: loaded...
   Active: active (running)...
Finally, you can use curl to make a request to frontend, which is listening on localhost:8080. Because frontend is configured to make a call to backend via its UPSTREAM_URIS environment variable, you should also see that call in the response:
$ curl http://localhost:8080
{
"name": "frontend",
"uri": "/",
"type": "HTTP",
"ip_addresses": [
"172.31.56.8"
],
"start_time": "2021-04-11T18:13:00.092110",
"end_time": "2021-04-11T18:13:00.095565",
"duration": "3.455289ms",
"body": "Hello from frontend",
"upstream_calls": {
"http://localhost:9090": {
"name": "Service",
"uri": "http://localhost:9090",
"type": "HTTP",
"ip_addresses": [
"172.31.56.8"
],
"start_time": "2021-04-11T18:13:00.094039",
"end_time": "2021-04-11T18:13:00.094256",
"duration": "217.486µs",
"headers": {
"Content-Length": "257",
"Content-Type": "text/plain; charset=utf-8",
"Date": "Sun, 11 Apr 2021 18:13:00 GMT"
},
"body": "Hello from backend",
"code": 200
}
},
"code": 200
}
Success! In the response you can see the upstream_calls to http://localhost:9090 (the backend service) responding with success code 200. The current architecture is shown in Figure 3-5.
Figure 3-5. frontend listens on port 8080 and backend listens on port 9090. When you call frontend, it makes a request to backend.
Now you’re ready to register your services into Consul.
Services must be registered into Consul so Consul knows what’s running and on what IPs and ports. Services can be registered via configuration files or API. In dynamic systems like container orchestrators it makes sense to use the API, but on VMs it’s often simpler to use configuration files because you know ahead of time which services will be running where.
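For reference, the CLI (which wraps the HTTP API) can register the same service definition files against the local agent, if you ever prefer that route. This is just a sketch of the alternative approach and isn’t needed for this example; it assumes a definition file like the ones you’re about to create:
$ consul services register /etc/consul.d/frontend.hcl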
Create the service configuration files in Consul’s configuration directory, /etc/consul.d:
$ sudo touch /etc/consul.d/frontend.hcl
$ sudo touch /etc/consul.d/backend.hcl
Next change the group and owner to be consul since Consul needs to read these files:
$ sudo chown consul:consul /etc/consul.d/frontend.hcl
$ sudo chown consul:consul /etc/consul.d/backend.hcl
Add the following contents to frontend.hcl:
service {
  name = "frontend"
  # frontend runs on port 8080.
  port = 8080

  # The "connect" stanza configures service mesh features.
  connect {
    sidecar_service {
      proxy {
        # The "upstreams" stanza configures which ports the
        # sidecar proxy will expose and what services they'll
        # route to.
        upstreams = [
          {
            # Here you're configuring 9091
            # to route to the backend service.
            destination_name = "backend"
            local_bind_port  = 9091
          }
        ]
      }
    }
  }
}
And for backend.hcl:
service {
  name = "backend"
  # backend runs on port 9090.
  port = 9090

  # The backend service doesn't call any other services,
  # so it doesn't need an "upstreams" stanza.
  connect {
    sidecar_service {}
  }
}
You need to tell Consul to reload its configuration so it picks up the new files. To do so, use the consul reload command:
$ consul reload
Configuration reload triggered
You can use the consul catalog services command to check if your services were registered successfully:
$ consul catalog services
backend
backend-sidecar-proxy
consul
frontend
frontend-sidecar-proxy
You should see frontend and backend listed, along with their sidecar proxies.
Why are frontend-sidecar-proxy and backend-sidecar-proxy listed? Under the hood, Consul treats the sidecar proxies as separate services with a special kind field that marks them as proxies. In the UI, you won’t see the proxies listed because they’re hidden, but in the API or CLI you will.
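If you’re curious, you can see this for yourself by asking the local agent’s API for its registered services; the sidecar entries carry a proxy kind. A quick sketch (it assumes jq is installed, and the exact service ID and value may vary slightly by Consul version):
$ curl -s localhost:8500/v1/agent/services | jq '."frontend-sidecar-proxy".Kind'
"connect-proxy"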
In the UI, frontend and backend should be listed (Figure 3-6), though they will be shown as unhealthy. This is because their sidecar proxies are not yet running.
Figure 3-6. The frontend and backend services are shown in the UI, but they are listed as unhealthy because their sidecar proxies are not yet running.
Now that your services are registered in Consul, it’s time to deploy their sidecar proxies.
For your services to become healthy, they require their sidecar proxies to be running. The sidecar proxies run Envoy, so first you need to install Envoy. Then you will configure a systemd service to run each proxy.
You will be using Envoy as your sidecar proxy so you first need to install Envoy onto your VM.
Use wget to download Envoy 1.18.3:
$ wget https://archive.tetratelabs.io/envoy/download/v1.18.3/envoy-v1.18.3-linux-amd64.tar.xz
Unarchive Envoy and mv it to /usr/local/bin/envoy:
$ tar -xvf envoy-v1.18.3-linux-amd64.tar.xz
$ sudo mv envoy-v1.18.3-linux-amd64/bin/envoy /usr/local/bin/envoy
Run envoy --version to validate the installation:
$ envoy --version
envoy version: .../1.18.3/Clean/RELEASE/BoringSSL
Finally, clean up the .tar.xz file and folder:
$ rm -rf envoy-v1.18.3-linux-amd64.tar.xz envoy-v1.18.3-linux-amd64
Now that envoy is installed, you’re ready to configure systemd to run your sidecar proxies.
Just like with the frontend and backend services, you will need to create systemd unit files. You’ll need one for frontend’s sidecar proxy and another for backend’s because you need to run a separate Envoy proxy process for each service.
Consul doesn’t support running a single Envoy proxy for multiple services because each Envoy proxy must encode the identity of the source service into every request.
The command you’ll be using to run the sidecar proxies is:
consul connect envoy
This command accepts a flag, -sidecar-for, that is used to configure the Envoy proxy for a specific service.
First, create the two unit files:
$ sudo touch /etc/systemd/system/frontend-sidecar-proxy.service
$ sudo touch /etc/systemd/system/backend-sidecar-proxy.service
The frontend-sidecar-proxy.service file should have the following contents:
[Unit] Description="Frontend sidecar proxy service" Requires=network-online.target After=network-online.target [Service] ExecStart=/usr/bin/consul connect envoy -sidecar-for frontend -admin-bind 127.0.0.1:19000 [Install] WantedBy=multi-user.target
And backend-sidecar-proxy.service should look like:
[Unit] Description="Backend sidecar proxy service" Requires=network-online.target After=network-online.target [Service] ExecStart=/usr/bin/consul connect envoy -sidecar-for backend -admin-bind 127.0.0.1:19001 [Install] WantedBy=multi-user.target
Next enable the two services:
$ sudo systemctl enable frontend-sidecar-proxy.service
$ sudo systemctl enable backend-sidecar-proxy.service
And start them up!
$ sudo systemctl start frontend-sidecar-proxy.service
$ sudo systemctl start backend-sidecar-proxy.service
Check their statuses:
$ sudo systemctl status frontend-sidecar-proxy.service
frontend-sidecar-proxy.service - "Frontend sidecar proxy service"
   Loaded: loaded...
   Active: active (running)...
$ sudo systemctl status backend-sidecar-proxy.service
backend-sidecar-proxy.service - "Backend sidecar proxy service"
   Loaded: loaded...
   Active: active (running)...
With the proxies now running, the UI should show everything as healthy (see Figure 3-7).
Figure 3-7. The frontend and backend services are now healthy because their sidecar proxies are running.
Use curl again to see the request from frontend to backend:
$ curl http://localhost:8080
{
"name": "frontend",
"uri": "/",
"type": "HTTP",
"ip_addresses": [
"172.31.56.8"
],
"start_time": "2021-04-11T18:13:00.092110",
"end_time": "2021-04-11T18:13:00.095565",
"duration": "3.455289ms",
"body": "Hello from frontend",
"upstream_calls": {
"http://localhost:9090": {
"name": "backend",
"uri": "http://localhost:9090",
"type": "HTTP",
"ip_addresses": [
"172.31.56.8"
],
"start_time": "2021-04-11T18:13:00.094039",
"end_time": "2021-04-11T18:13:00.094256",
"duration": "217.486µs",
"headers": {
"Content-Length": "257",
"Content-Type": "text/plain; charset=utf-8",
"Date": "Sun, 11 Apr 2021 18:13:00 GMT"
},
"body": "Hello from backend",
"code": 200
}
},
"code": 200
}
The response looks the same as before, so how do you know it’s being routed through the service mesh? One thing you can try is to stop one of the proxies. If the proxy is stopped, we’d expect the request to fail.
Stop the backend sidecar proxy:
$ sudo systemctl stop backend-sidecar-proxy
And now try the request again:
$ curl http://localhost:8080
{
...
"upstream_calls": {
"http://localhost:9090": {
"name": "backend",
...
"body": "Hello from backend",
"code": 200
The request succeeded with the sidecar proxy not even running so what’s going on?
Unlike in Kubernetes, when adding services to the service mesh on VMs you need to make some changes to your services so that they route traffic through their sidecar proxies. Right now, frontend is still calling backend directly and bypassing both its local proxy and backend’s proxy, as shown in Figure 3-8.
Figure 3-8. frontend is calling backend directly, bypassing the sidecar proxies.
You need to configure frontend to route its requests to backend through its sidecar proxy. Luckily, your frontend application exposes the environment variable UPSTREAM_URIS for configuring the URL used to call backend. Currently in your frontend.service file this is set to http://localhost:9090, so all you need to do is set it to http://localhost:9091, which is where its sidecar Envoy proxy is listening.
Edit /etc/systemd/system/frontend.service (note: frontend.service, not frontend-sidecar-proxy.service) and change the UPSTREAM_URIS environment variable:
...
Environment=UPSTREAM_URIS=http://localhost:9091
...
Now run systemctl daemon-reload so that systemd loads the updated configuration, and then use systemctl restart to restart the frontend service:
$ sudo systemctl daemon-reload
$ sudo systemctl restart frontend.service
If you run the curl command again, you should now see a failure because the backend proxy is not running:
$ curl localhost:8080
...
"upstream_calls": {
"http://localhost:9091": {
"error": "Error communicating with upstream service:
Get "http://localhost:9091/": read… connection reset by peer"
}
Start the backend-sidecar-proxy service back up:
$ sudo systemctl start backend-sidecar-proxy
And run curl again. The error should go away (you may need to wait a couple of seconds for the proxy to be healthy):
$ curl localhost:8080
{
...
"upstream_calls": {
"http://localhost:9091": {
"name": "backend",
"body": "Hello from backend",
"code": 200
Now you’ve confirmed that the requests between frontend and backend are being routed through the service mesh! Figure 3-9 shows what the architecture looks like now.
Figure 3-9. frontend is calling backend through the sidecar proxies.
There are two sets of ports for a sidecar proxy: upstream ports and public ports. Upstream ports are used by the local service to route to its upstream dependencies. In this example, 9091 is an upstream port.
Public ports are the ports that incoming traffic from other sidecar proxies will be received on. In Figure 3-9, that port for backend’s sidecar proxy isn’t shown because it’s dynamically allocated by Consul. It can be hardcoded via the port setting: connect { sidecar_service { port = 21000 } }.
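For example, if you wanted backend’s sidecar proxy to always listen on port 21000, its registration in backend.hcl would look something like this (a sketch; 21000 is an arbitrary choice from the range Consul normally assigns to sidecar proxies):
service {
  name = "backend"
  port = 9090

  connect {
    sidecar_service {
      # Hardcode the public port this sidecar proxy listens on.
      port = 21000
    }
  }
}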
In this chapter, you learned how to deploy services and add them to a Consul service mesh. On Kubernetes, this was a simple matter of annotating your Deployments with the consul.hashicorp.com/connect-inject: "true" annotation. Under the hood, Consul handled automatically adding the sidecar proxy to each Pod.
On VMs, you first needed to install the services and run them using systemd. Then you configured Consul via configuration files so it knew about the services. Finally, you installed Envoy and started the sidecar proxies for each service, again using systemd.
Now that you’ve got your services running in the Consul service mesh it’s time to learn about all the features a service mesh brings to the table. In the next chapter, we’ll focus on security. Specifically, we’ll look at encryption, authentication, and authorization.
1 Technically, a Deployment is made up of ReplicaSets, which are in turn made up of Pods, but in practice you can ignore the ReplicaSets.