9

Deploying to Kubernetes

In this chapter, we will shift our focus to the Kubernetes hosting mode of Dapr. First, we will learn how to prepare our sample projects so that they can be deployed in containerized form, before moving on to setting up Dapr in an Azure Kubernetes Service cluster.

This chapter will give us visibility into the production-ready environment for a Dapr-based solution: a Kubernetes cluster.

In this chapter, we will cover the following topics:

  • Setting up Kubernetes
  • Setting up Dapr on Kubernetes
  • Deploying a Dapr application to Kubernetes
  • Exposing Dapr applications to external clients

As architects and developers, we usually mainly focus on defining and implementing a solution. However, it is important to understand the implications of the deployment options and how these can influence our design choices, as well as impact our overall solution architecture.

Our first objective is to provision the Kubernetes cluster and connect to it from our development environment.

Technical requirements

The code for this chapter can be found on GitHub at https://github.com/PacktPublishing/Practical-Microservices-with-Dapr-and-.NET-Second-Edition/tree/main/chapter09.

In this chapter, the working area for the scripts and code can be found at <repository path>\chapter09. In my local environment, it can be found here: C:\Repos\dapr-samples\chapter09.

Please refer to the Setting up Dapr section of Chapter 1, Introducing Dapr, for a complete guide on the tools needed to develop with Dapr and work with the examples provided.

Setting up Kubernetes

While the discussion around microservice architectures evolved on its own, the rise of containerized deployments has propelled their popularity among developers and architects.

Once you have a multitude of microservices, each composed of one or many containers, you soon realize you need a piece of software that can orchestrate these containers. In a nutshell, orchestration is the reason why Kubernetes is so relevant and appears so frequently in the context of microservice architectures.

Important note

Kubernetes (k8s) is the most popular open source container orchestrator and is a project that’s maintained by the Cloud Native Computing Foundation (CNCF). To learn more about Kubernetes, I suggest that you read straight from the source at https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/.

In this section, we are going to provision an Azure Kubernetes Service (AKS) cluster. Even though covering all the details of AKS is beyond this book’s scope, for those of you who are not already Kubernetes geeks, it will be helpful to become familiar with some of its concepts and tooling.

These are the steps we will be going through:

  1. Creating an Azure resource group
  2. Creating an AKS cluster
  3. Connecting to the AKS cluster

For your convenience, all the commands in this and the following steps have been provided in the Deploy\setup-AKS.ps1 file.

Let’s start by preparing a resource group.

Creating an Azure resource group

In a Windows terminal, log in to Azure by using the Azure CLI. We could also use the portal here but, given that you will be using files from the repository we cloned locally in the next few sections, it might be easier for you to run the CLI locally. This also keeps the experience consistent between Azure and Kubernetes.

Let’s connect to the subscription that we want to provision the cluster in. It will probably be the same Azure subscription you used in the previous chapters. Mine is named Sandbox:

PS C:\Repos\dapr-samples\chapter09> az login
PS C:\Repos\dapr-samples\chapter09> az account set --subscription "Sandbox"

All the commands that we will be issuing via the Azure CLI will be issued in the context of the specified Azure subscription.

PS C:\Repos\dapr-samples\chapter09> az group create --name $rg --location $location

In the previous command, we create the resource group, with the $rg and $location variables set to values of our choice.
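
If you are typing the commands by hand rather than running Deploy\setup-AKS.ps1, a minimal sketch of the variables used throughout these steps could look like this; the values are purely illustrative, so pick names and a region appropriate for your subscription:

$rg = "dapr-aks-rg"        # resource group name (example value)
$location = "westeurope"   # Azure region (example value)
$aksname = "dapr-aks"      # AKS cluster name, reused in later commands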

Creating an AKS cluster

Now, we can create the AKS cluster. Please check the walkthrough available at https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough for additional information.

In the following CLI command, we are choosing to enable the monitoring feature and prefer the use of VirtualMachineScaleSets (an Azure feature that lets you manage and scale nodes as a group) for the nodes rather than individual VMs:

PS C:\Repos\dapr-samples\chapter09> az aks create --resource-group $rg --name $aksname `
    --node-count 3 --node-vm-size Standard_D2s_v3 `
    --enable-addons monitoring `
    --vm-set-type VirtualMachineScaleSets `
    --generate-ssh-keys

After waiting a few minutes for the cluster to be created, we can verify the status of the AKS cluster resource, named via the $aksname variable, with the following command:

PS C:\Repos\dapr-samples\chapter09> az aks show --name $aksname --resource-group $rg

Once we have successfully created the AKS cluster, we can connect to it.

Connecting to the AKS cluster

Once the cluster has been created, we need to install the Kubernetes tooling, namely the kubectl CLI, in our development environment. The Azure CLI facilitates this with the following command:

PS C:\Repos\dapr-samples\chapter09> az aks install-cli

With the Azure CLI, we can also retrieve the credentials we need to gain administrative access to the cluster. These credentials will be merged into the default location for the Kubernetes configuration file:

PS C:\Repos\dapr-samples\chapter09> az aks get-credentials --name $aksname --resource-group $rg
Merged "xyz" as current context in C:\Users\user\.kube\config

We now have access to the cluster, and with the kubectl CLI, we can control any aspect of it. As a starting point, let’s examine the cluster’s composition:

PS C:\Repos\dapr-samples\chapter09> kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-14504866-vmss000003   Ready    agent   11m   v1.21.9
aks-nodepool1-14504866-vmss000004   Ready    agent   11m   v1.21.9
aks-nodepool1-14504866-vmss000005   Ready    agent   11m   v1.21.9

According to the kubectl get nodes command, we have three nodes running, as specified in the AKS provisioning command.

From this point forward, the experience will have less to do with Azure and more to do with Dapr and Kubernetes. Due to this, it will apply to any suitable similar containerized environment, such as other cloud providers or edge/hybrid scenarios. I suggest getting familiar with the kubectl CLI documentation, which is available at https://kubernetes.io/docs/reference/kubectl/.
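
As a starting sampler, these are all standard kubectl commands, none of them Dapr-specific, that you will reach for constantly while exploring a cluster:

kubectl cluster-info                # control plane and core service endpoints
kubectl get namespaces              # namespaces defined in the cluster
kubectl get pods --all-namespaces   # every Pod currently scheduled
kubectl describe node <node-name>   # capacity and status details of a single node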

Additionally, you can install the client tools for Helm in your development environment. Helm is a package manager for Kubernetes, often used for richer and more complex solutions. To learn more, look at the documentation at https://helm.sh/docs/intro/install/.

In this section, we managed to create an AKS instance, install the necessary Kubernetes tools locally, and gain access to the cluster itself. The next step is to install Dapr in our AKS cluster.

Setting up Dapr on Kubernetes

At this stage, a Kubernetes cluster (specifically, AKS on Azure) is ready to accommodate our workload. We need to install Dapr before we can move on to the preparation phase for our applications.

In Chapter 1, Introducing Dapr, we used the dapr init CLI command to initialize Dapr in our development environment. We’ll use it again here to initialize Dapr in Kubernetes, but we will add the -k parameter:

PS C:\Repos\dapr-samples\chapter09> dapr init -k
Making the jump to hyperspace...
Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr/#install-with-helm-advanced
Deploying the Dapr control plane to your cluster...
Success! Dapr has been installed to namespace dapr-system.
To verify, run `dapr status -k' in your terminal.
To get started, go here: https://aka.ms/dapr-getting-started

The preceding command installs and initializes the Dapr components in the cluster targeted by the current Kubernetes configuration context.

We can verify that the Dapr services we learned about in Chapter 1, Introducing Dapr, are now present in the cluster by executing the following Dapr CLI command:

PS C:\Repos\dapr-samples\chapter09> dapr status -k
NAME                   NAMESPACE    HEALTHY  STATUS   REPLICAS  VERSION  AGE  CREATED
dapr-dashboard         dapr-system  True     Running  1         0.10.0   16d  …
dapr-placement-server  dapr-system  True     Running  1         1.8.4    16d  …
dapr-sentry            dapr-system  True     Running  1         1.8.4    16d  …
dapr-sidecar-injector  dapr-system  True     Running  1         1.8.4    16d  …
dapr-operator          dapr-system  True     Running  1         1.8.4    16d  …

The dapr status -k command is equivalent to querying the Pods currently running in the Kubernetes dapr-system namespace via the kubectl CLI:

PS C:\Repos\dapr-samples\chapter09> kubectl get pods -n dapr-system
NAME                                     READY   STATUS    RESTARTS   AGE
dapr-dashboard-78557d579c-ngvqt          1/1     Running   0          2m4s
dapr-operator-74cdb5fff9-7qt89           1/1     Running   0          2m4s
dapr-placement-7b5bbdd95c-d6kmw          1/1     Running   0          2m4s
dapr-sentry-65d64b7cd8-v7z9n             1/1     Running   0          2m4s
dapr-sidecar-injector-7759b8b9c4-whvph   1/1     Running   0          2m4s

From the Pod count in the preceding output, you can see that there is only one replica of each of the Dapr system services.

However, this could change with the highly available deployment option of Dapr. For our development environment, it’s fine to have just one replica since our cluster has a reduced number of nodes. For this and other production guidelines on how to set up and operate Dapr on Kubernetes in a production environment, please see the documentation at https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production/.
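
As a hedged sketch of that option (the linked documentation is the authoritative guide), the Dapr Helm chart exposes an HA switch that raises each control plane service to multiple replicas:

helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
helm upgrade --install dapr dapr/dapr `
    --namespace dapr-system `
    --set global.ha.enabled=true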

Let’s explore the Kubernetes services provided by Dapr by running the following command in the context of the dapr-system namespace:

PS C:\Repos\dapr-samples\chapter09> kubectl get services -n dapr-system
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE
dapr-api                ClusterIP   10.0.123.133   <none>        80/TCP               23d
dapr-dashboard          ClusterIP   10.0.206.23    <none>        8080/TCP             23d
dapr-placement-server   ClusterIP   None           <none>        50005/TCP,8201/TCP   23d
dapr-sentry             ClusterIP   10.0.143.67    <none>        80/TCP               23d
dapr-sidecar-injector   ClusterIP   10.0.36.244    <none>        443/TCP              23d
dapr-webhook            ClusterIP   10.0.166.229   <none>        443/TCP              23d

You can also gather the same information from the Azure portal, as shown in the following screenshot:

Figure 9.1 – Services by namespace in AKS

Here, upon inspecting the Kubernetes resources of the AKS cluster, you will see Services and ingresses that have been configured in the namespace.

We can also leverage the Dapr dashboard here:

PS C:\Repos\dapr-samples\chapter09> dapr dashboard -k
Dapr dashboard found in namespace:      dapr-system
Dapr dashboard available at:    http://localhost:8080

With the -k option, the dapr dashboard command opens the Dapr dashboard that is running in the Kubernetes environment. All this is done inside the dapr-system namespace:

Figure 9.2 – The Dapr dashboard in Kubernetes hosting mode

As a recap, you can find an overview of the various methods (kubectl, the Azure portal, the Dapr dashboard, and so on) we can use to gather feedback on the Dapr services running in Kubernetes by going to the following Kubernetes section of the Dapr documentation: https://docs.dapr.io/operations/hosting/kubernetes/.

The Kubernetes cluster is now ready to accept the deployment of a Dapr-enabled application.

Deploying a Dapr application to Kubernetes

The service code for our Dapr applications is now complete. However, we must package them so that they can be deployed to Kubernetes appropriately. Our first objective is to publish these services as Docker containers.

The sample that’s available for this chapter, C:\Repos\dapr-samples\chapter09, is aligned with the status we reached at the end of Chapter 8, Using Actors. To recap, the following Dapr applications comprise our overall solution:

  • sample.microservice.order
  • sample.microservice.reservation.service
  • sample.microservice.reservationactor.service
  • sample.microservice.customization
  • sample.microservice.shipping

The preceding list represents the Dapr applications, each with its corresponding ASP.NET project, that we need to build as Docker images to deploy to Kubernetes later.

While this chapter also shows how to build our Dapr applications into Docker images and push those images to a private container registry, ready-to-use images are available on Docker Hub if you wish to expedite the process:

  • davidebedin/sample.microservice.order, which corresponds to the sample.microservice.order application
  • davidebedin/sample.microservice.reservation, which corresponds to the sample.microservice.reservation.service application
  • davidebedin/sample.microservice.reservationactor, which corresponds to the sample.microservice.reservationactor.service application
  • davidebedin/sample.microservice.customization, which corresponds to the sample.microservice.customization application
  • davidebedin/sample.microservice.shipping, which corresponds to the sample.microservice.shipping application

Even if you decide to use the readily available container images, I suggest reading the instructions in the next two sections to get an overview of the process to build and push container images. Deployment scripts for leveraging the images on Docker Hub are available in this chapter’s working area.

What is Docker Hub?

Docker Hub is the world’s largest repository of container images, a service provided by Docker for finding and sharing container images with everyone. You can learn more at https://hub.docker.com/.

Next, we will start building the Docker images.

Building Docker images

As we intend to deploy our sample application to Kubernetes, the Docker container is the deployment format we must (and will) use.

Important note

For more information on how to publish ASP.NET projects with the Docker container format, I suggest that you read the documentation at https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/docker/building-net-docker-images?view=aspnetcore-6.0.

A Dockerfile is a text file that contains all the commands the Docker CLI needs to step through to build a Docker image. Let’s start by examining the Dockerfile in the sample.microservice.reservationactor.service application folder:

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["sample.microservice.reservationactor.interfaces/sample.microservice.reservationactor.interfaces.csproj", "sample.microservice.reservationactor.interfaces/"]
COPY ["sample.microservice.reservationactor.service/sample.microservice.reservationactor.service.csproj", "sample.microservice.reservationactor.service/"]
RUN dotnet restore "sample.microservice.reservationactor.interfaces/sample.microservice.reservationactor.interfaces.csproj"
RUN dotnet restore "sample.microservice.reservationactor.service/sample.microservice.reservationactor.service.csproj"
COPY . .
WORKDIR "/src/sample.microservice.reservationactor.service"
RUN dotnet publish "sample.microservice.reservationactor.service.csproj" -c Release -o /app/publish --no-restore
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "sample.microservice.reservationactor.service.dll"]
… omitted …

In the previous Dockerfile, there are several stages in the build process. Here, we are using two separate base images:

  • mcr.microsoft.com/dotnet/aspnet:6.0 is an image containing the runtime and libraries for ASP.NET. It is optimized for running ASP.NET applications in production.
  • mcr.microsoft.com/dotnet/sdk:6.0 contains the .NET CLI, in addition to its runtime, and is suitable for building ASP.NET projects.

First, a stage based on the mcr.microsoft.com/dotnet/sdk:6.0 image with the build alias is used as a destination for copying the involved projects, restoring the dependencies with dotnet restore, and then publishing them with dotnet publish.

In the final stage, based on the mcr.microsoft.com/dotnet/aspnet:6.0 base image, the published output is copied from the build stage into the working folder; ENTRYPOINT declares the dotnet command to run on the project library once the container has been started.

The Deploy\build-container.ps1 file has been prepared to build all the container images for our microservices. At this stage, let’s examine the parameters of the docker build command:

… omitted …
$container = "sample.microservice.order"
$latest = "{0}/{1}:{2}" -f $registry, $container, $default
$versioned = "{0}/{1}:{2}" -f $registry, $container, $buildversion
docker build . -f .\sample.microservice.order\Dockerfile `
    -t $latest -t $versioned `
    --build-arg BUILD_DATE=$builddate --build-arg BUILD_VERSION=$buildversion
… omitted …

In the previous snippet, the docker build command with the -f parameter references the Dockerfile of the Dapr application in the project folder, with -t setting two tags for the image while following the registry/container:tag pattern. Here, registry could be the name of a container registry, such as Azure Container Registry (ACR), or the username in Docker Hub. Finally, the --build-arg parameters specify some arguments to be used in the Dockerfile itself.

Let’s execute this docker build command directly in the command line:

PS C:\Repos\practical-dapr\chapter09> docker build . -f .\sample.microservice.reservationactor.service\Dockerfile -t sample.microservice.reservationactor.service:2.0
…
 => [build 1/9] FROM mcr.microsoft.com/dotnet/sdk:6.0@sha256:fde93347d1cc74a03f1804f113ce85add00c6f0af15881181165ef04bc76bd00    0.0s
 => [stage-1 1/3] FROM mcr.microsoft.com/dotnet/aspnet:6.0@sha256:431d21f51d76da537d305827e791d23bfcf4aebfa019c12ee8e14dfb71c64cca    0.0s
…
 => [build 7/9] COPY . .    0.2s
 => [build 8/9] WORKDIR /src/sample.microservice.reservationactor.service    0.1s
 => [build 9/9] RUN dotnet publish "sample.microservice.reservationactor.service.csproj" -c Release -o /app/publish --no-restore    6.5s
 => [stage-1 3/3] COPY --from=build /app/publish .    0.1s
…
 => => writing image sha256:98ada5ed54d4256392afac9534745b4efe7d483ff597501bfef30c6645edb557    0.0s
 => => naming to docker.io/sample.microservice.reservationactor.service:2.0

In the previous output, most of which has been omitted for brevity, it’s worth noting that each of our Dockerfile instructions is evaluated as a numbered step. In the end, we have our ENTRYPOINT.

The build process with Docker must be performed for each of the Dapr applications we intend to deploy to Kubernetes.
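
The Deploy\build-container.ps1 script is the authoritative version of this loop; as a hedged sketch of the idea, iterating docker build over the five project folders from the solution root could look like this, with $registry and $buildversion being illustrative values you set beforehand:

foreach ($app in @(
    "sample.microservice.order",
    "sample.microservice.reservation.service",
    "sample.microservice.reservationactor.service",
    "sample.microservice.customization",
    "sample.microservice.shipping")) {
    # run from the solution root so the COPY instructions in each Dockerfile resolve
    docker build . -f ".\$app\Dockerfile" -t "$registry/${app}:$buildversion"
}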

We should now have our images built and available in our development machine. At this point, we need to publish them to a container image registry so that we can use them from our Kubernetes cluster.

Pushing Docker images

In this section, we are going to run our Dapr applications on the Kubernetes cluster. For this, we need to push the container images from our local environment to a location that can be accessed by the Kubernetes cluster.

A Kubernetes cluster can retrieve images from a container registry, which usually hosts public and/or private container repositories. A container repository is a collection of versions of a container image:

  • Docker Hub is a container registry offering private and public repositories
  • Azure Container Registry (ACR) is a private container registry service running on Azure
  • Other container registry options are available in private and public spaces

I decided to use ACR because it fits well with the overall Azure-focused scenario. If you want, you can publish the sample containers to public or private repositories on Docker Hub, or any other registry.

For your convenience, the commands in this section can be found in the Deploy\setup-ACR-AKS.ps1 file; please remember to adapt the parameters to your specific case.

For a walkthrough on how to create an ACR instance, please take a look at the documentation provided at https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal.
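
If you prefer the CLI over the portal for this step too, a minimal sketch follows; ACR names must be globally unique and alphanumeric, and $acrname is whatever name you choose:

az acr create --resource-group $rg --name $acrname --sku Basic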

Important note

VS Code offers a rich developer experience when it comes to building Docker containers and pushing them to remote registries. It makes sense to leverage these features while learning something new, but it is highly recommended to integrate the container build process into a CI/CD pipeline leveraging GitHub, Azure DevOps, or any other platform suitable for the task.

As a starting point, I suggest that you read this article by my colleague Jessica Tibaldi on a CI/CD approach to Dapr: https://www.linkedin.com/pulse/s01e02-dapr-compass-how-we-organized-work-jessica-tibaldi/.

If we examine our local environment, we should note that we have some new Docker container images. The following is a screenshot from VS Code:

Figure 9.3 – Local images and ACR in VS Code

On the left-hand side of the preceding screenshot, we can see the images that are known by our Docker local environment. All the container images we just built are present in this list, starting with the sample.microservice.reservationactor.service image that we analyzed in the previous section.

On the right-hand side of the preceding screenshot, we can see the destination of the container images: an ACR instance.

The Azure extension in VS Code offers an integrated developer experience by easing authentication and access to our Azure resources: we can push the Docker images we just built to the ACR instance with just a click. You can find a step-by-step guide on how to do this at https://code.visualstudio.com/docs/containers/quickstart-container-registries.

If we look at the integrated terminal window in VS Code, we will see that the docker push command has been executed:

> Executing task: docker push <ACR repository name>/sample.microservice.reservationactor.service:latest <
The push refers to repository [dapracrdb.azurecr.io/sample.microservice.reservationactor.service]
d513a9a5f622: Layer already exists
afc798cc7710: Layer already exists
049b0fdaa27c: Layer already exists
87e08e237115: Layer already exists
1915427dc1a4: Layer already exists
8a939c4fd477: Layer already exists
d0fe97fa8b8c: Layer already exists
latest: digest: sha256:c6ff3b4058c98728318429ca49f0f8df0635f7efdfe34b6ccc7624fc3cea7d1e size: 1792

Some commands issued by the VS Code extension haven’t been shown in the preceding output for brevity. With docker tag, an alias was assigned that matches the ACR address. So, our locally built sample.microservice.reservationactor.service image can now be referred to as a combination of <ACR registry name>/<image name>:<tag>.
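
If you would rather run the same steps by hand than through VS Code, a hedged equivalent looks like this, where $acrname is the ACR instance name from earlier and the tag is illustrative:

az acr login --name $acrname
docker tag sample.microservice.reservationactor.service:2.0 "$acrname.azurecr.io/sample.microservice.reservationactor.service:2.0"
docker push "$acrname.azurecr.io/sample.microservice.reservationactor.service:2.0"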

Each of the images we’ve built for the Dapr applications should now be pushed to the remote ACR instance.

Once all the container images are available in the registry, we can deploy the Dapr applications to Kubernetes. Before that, we need to connect ACR to AKS.

From a terminal window, we can launch the following Azure CLI command, given we have set values for the AKS cluster in $aksname, the resource group in $rg, and the name of our ACR instance in $acrname:

PS C:\Repos\practical-dapr\chapter09> az aks update --name $aksname --resource-group $rg --attach-acr $acrname

Because we launched the command in the context of our Azure login, which has the correct access rights on both AKS and ACR, it executes successfully.

In the next section, we will learn how to pass secrets to Dapr components in Kubernetes.

Managing secrets in Kubernetes

Secrets such as passwords, connection strings, and keys should always be kept separate from the rest of the code, as well as the configuration of a solution. This is because they could compromise its security if they’re shared inappropriately.

Secret management is another building block of Dapr: integration is possible with many secret stores, such as Azure Key Vault, HashiCorp Vault, and Kubernetes itself. A full list is available at https://docs.dapr.io/developing-applications/building-blocks/secrets/howto-secrets/.

A Dapr application can retrieve secrets by invoking the Dapr API via the following endpoint:

GET http://localhost:<port>/v1.0/secrets/<vault>/<secret>
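
For example, once the cosmosdb-secret we create later in this section exists, a call from inside a Dapr-enabled container could read it through the built-in Kubernetes secret store. This is a sketch assuming the default Dapr HTTP port of 3500 and the default store name, kubernetes:

curl http://localhost:3500/v1.0/secrets/kubernetes/cosmosdb-secret
# expected shape of the response (values elided): {"masterKey":"...","url":"..."}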

We can also use secrets to configure Dapr components, as documented at https://docs.dapr.io/operations/components/component-secrets/. In our sample solution, we are using an Azure Service Bus for the common pub/sub component and an Azure Cosmos DB for the state store of all the components. These rely on keys and connection strings that we need to keep secure.

At this point, we need a secret store. Instead of creating another Azure resource, we can strike a balance between complexity and another learning opportunity by adopting the Kubernetes built-in secret store, which is readily available via kubectl.

To create a secret via the kubectl CLI, we can use the kubectl create secret syntax, as described in the Deploy\deploy-secrets.ps1 file. This is shown in the following code snippet:

kubectl create secret generic cosmosdb-secret --from-literal=masterKey='#secret#' --from-literal=url='#secret#'
kubectl create secret generic servicebus-secret --from-literal=connectionString='#secret#'
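
We can confirm that the secrets landed in the cluster; neither of these commands prints the secret values, only the key names and sizes:

kubectl get secrets cosmosdb-secret servicebus-secret
kubectl describe secret cosmosdb-secret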

We can then reference these secrets from the Dapr components’ .yaml files. For instance, the following file at Deploy\component-state-reservationitem.yaml defines the actor state store component, which is mainly used by the reservationactor-service Dapr application:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: reservationitemactorstore
  namespace: default
spec:
  type: state.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    secretKeyRef:
      name: cosmosdb-secret
      key: url
  - name: masterKey
    secretKeyRef:
      name: cosmosdb-secret
      key: masterKey
  - name: database
    value: state
  - name: collection
    value: reservationitemactorstate
  - name: actorStateStore
    value: "true"

As you can see, instead of writing values directly in the component-state-reservationitem.yaml file’s metadata, we are referencing the url and masterKey keys from the secret; that is, cosmosdb-secret.

We must reference the secrets from all the Dapr components’ configuration files. Once we’ve done that, we can start deploying our Dapr applications.
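
Once the component manifests have been applied (we will do this in the next section), either of the following commands should list them, the first via the Dapr CLI and the second via the underlying Kubernetes custom resources:

dapr components -k
kubectl get components.dapr.io --namespace default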

Deploying applications

So far, we have managed to push the Docker images for our Dapr applications to ACR.

As you may recall from the previous chapters, in Dapr’s standalone hosting mode, we launched an application with the Dapr CLI, like this:

dapr run --app-id "reservationactor-service" --app-port "5004" --dapr-grpc-port "50040" --dapr-http-port "5014" --components-path "components" -- dotnet run --project ./sample.microservice.reservationactor.service/sample.microservice.reservationactor.service.csproj --urls="http://+:5004"

In Kubernetes hosting mode, we will deploy the Dapr application with a .yaml file, configuring all the parameters present in the previous snippet.

The Deploy\sample.microservice.reservationactor.yaml file that corresponds to the reservationactor-service Dapr application looks as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: reservationactor-service
  namespace: default
  labels:
    app: reservationactor-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reservationactor-service
  template:
    metadata:
      labels:
        app: reservationactor-service
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "reservationactor-service"
        dapr.io/app-port: "80"
        dapr.io/log-level: "info"
    spec:
      containers:
      - name: reservationactor-service
        image: <registry>/sample.microservice.reservationactor:2.0
        ports:
        - containerPort: 80
        imagePullPolicy: Always

The previous snippet is a standard configuration file for a Kubernetes deployment. The image <registry>/sample.microservice.reservationactor:2.0, where <registry> must be replaced with the ACR instance you provisioned previously, specifies the container image with the 2.0 tag. The Dapr-specific portion sits entirely in the metadata annotations: the dapr.io/enabled: "true" annotation informs the Dapr control plane operating in Kubernetes that the Dapr sidecar container must be injected into this Pod, granting it the ability to access the Dapr building blocks. The dapr.io/app-id: "reservationactor-service" annotation specifies the Dapr application’s unique ID.

Many other Dapr annotations can be used in Kubernetes to influence deployments. For an exhaustive list, please check the documentation at https://docs.dapr.io/reference/arguments-annotations-overview/.

This is the overall, and minimal, impact Dapr has on the deployment configuration of microservices destined to be deployed on Kubernetes.

Docker Hub ready-to-use images

Ready-to-use container images for each of the sample Dapr applications are available on Docker Hub, and the corresponding .yaml files and scripts are available in this chapter’s working area. As an example, the Deploy\sample.microservice.reservationactor.dockerhub.yaml file contains the deployment that corresponds to the one we just analyzed.

Armed with the necessary components and deployment files, we are ready to deploy the solution to Kubernetes.

To make this step simpler, two files have been provided for you in C:\Repos\dapr-samples\chapter09:

  • The Deploy\deploy-solution.ps1 file can be used if you are leveraging your own ACR to host the application container images. Before using it, remember to replace the variables in the referenced files with the values that apply to your situation.
  • The Deploy\deploy-solution.dockerhub.ps1 file leverages the images on Docker Hub.

As an example, looking at the Deploy\deploy-solution.ps1 file, the commands are meant to be executed from the solution’s root folder, which in my case is C:\Repos\dapr-samples\chapter09:

kubectl apply -f .\Deploy\component-pubsub.yaml --namespace $namespace
kubectl apply -f .\Deploy\sample.microservice.order.yaml --namespace $namespace

The first command applies the configuration of a Dapr component to a namespace in our Kubernetes cluster. The second command applies the deployment manifest for a Dapr application to the same namespace.
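
A quick, hedged way to confirm that the sidecar injection took effect is to check the container count of the application’s Pods; a Dapr-enabled Pod should report two containers, our service plus the injected daprd sidecar:

kubectl get pods -l app=order-service --namespace $namespace
# the READY column should read 2/2 for each Pod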

We can check the impact that the new deployment has on Kubernetes from the Azure portal, via kubectl, or with the Dapr dashboard. With dapr dashboard -k, we can see that all our Dapr applications have been configured in Kubernetes, as shown in the following screenshot:

Figure 9.4 – The Dapr Applications view in the Dapr dashboard

Here, we can see the list of Dapr applications that are available, shown by their Dapr application IDs.

In the following screenshot, we can see the Dapr Components view in the Dapr dashboard:

Figure 9.5 – The Dapr Components view in the Dapr dashboard

Before we move on to the next section, let’s verify the output of the Dapr application.

By using the kubectl logs -f command, we can follow the output log from any app and container, in a similar fashion to what we did in the local development environment via the Dapr CLI:

PS C:\Repos\dapr-samples\chapter09> kubectl logs -l app=reservationactor-service -c reservationactor-service --namespace default -f
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app/

The preceding output comes from the ASP.NET containers in the Pods for the deployment that corresponds to the reservationactor-service Dapr application: for the time being, we can only see log events related to its startup.

You might be wondering: how can we reach the API exposed by order-service to test it? No one can access it yet; I just wanted you to know how to read the logs first. Before we move on, we need to learn how to expose our ASP.NET, Dapr-enabled applications in Kubernetes. We’ll do this in the next section.

Exposing Dapr applications to external clients

At this stage of deploying the Biscotti Brutti Ma Buoni backend solution to Kubernetes, we have all the Dapr components and applications properly configured. However, no external calls can reach any of our service APIs yet.

Our objective is to expose the ASP.NET endpoints of the Dapr applications, starting with order-service, so that we can invoke the /order API method from our client machine. The following diagram shows what we are trying to achieve:

Figure 9.6 – The main components of a Kubernetes deployment

In the preceding diagram, the main Dapr services are depicted in Kubernetes alongside our Dapr applications, which are represented as Pods containing the ASP.NET container with our service code, along with the Dapr sidecar.

We need to configure our Kubernetes cluster with an ingress controller (IC). For this, we can use NGINX. A detailed step-by-step configuration is available in the Azure documentation at https://docs.microsoft.com/en-us/azure/aks/ingress-basic.

For your convenience, the Deploy\deploy-nginx.ps1 file contains all the instructions for preparing the NGINX IC and the ingresses according to our sample solution. The first step is to install NGINX via Helm:

helm install nginx-ingress ingress-nginx/ingress-nginx `
    --namespace $namespace `
    --set controller.replicaCount=2

Once the deployment has been completed, we can verify that the IC is ready with the following command:

PS C:\Repos\dapr-samples\chapter09> kubectl --namespace default get services -o wide -w nginx-ingress-ingress-nginx-controller
NAME                                      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.0.21.122   x.y.z.k       80:32732/TCP,443:30046/TCP   41h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx

Let’s take note of the public IP (the x.y.z.k address) of the IC from the EXTERNAL-IP column.

Once the IC has been deployed, we must configure an ingress that corresponds to each service we want to expose.

By examining the Deploy\services.yaml configuration file, we can observe the configuration of resources of kind Service for order-service, followed by reservation-service:

apiVersion: v1
kind: Service
metadata:
  name: order-service
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: order-service
---
… omitted …

The ingresses for order-service and reservation-service are available in the Deploy\ingress-nginx.yaml file, as shown here:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: order-service-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /bbmb/(order)/(.*)
        pathType: Prefix
        backend:
          service:
            name: order-service
            port:
              number: 80
---
… omitted …

In the previous configuration snippet, we can see that a resource of the Ingress kind has been defined. In the spec field, we can find everything related to NGINX: a path is defined to identify requests to route to the backend service named order-service. Thanks to the rewrite-target annotation and the two capture groups in the path, a request for /bbmb/order/<id> is forwarded to order-service as /order/<id>. A similar ingress is defined to reach the reservation-service service.

To apply these configurations to our Kubernetes environment, we must use the following command:

kubectl apply -f .\Deploy\services.yaml
kubectl apply -f .\Deploy\ingress-nginx.yaml

At this point, we should get an output that looks similar to the following:

Figure 9.7 – AKS ingress resources view

Here, we can see the two ingresses of the nginx class: order-service-ingress and reservation-service-ingress. While I have obfuscated the Address details of my deployment, the address is the same as the IC’s public IP we obtained previously.

We can run a manual test by invoking the HTTP endpoint via cURL or the browser, following the http://<FQDN or IP>/bbmb/order/ path to reach, via the order-service service, the corresponding ASP.NET application. The same applies to http://<FQDN or IP>/bbmb/balance/ for the reservation-service service.
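
As a sketch of such a probe, replace x.y.z.k with the EXTERNAL-IP we noted earlier; rockiecookie is an item name taken from our samples:

curl http://x.y.z.k/bbmb/balance/rockiecookie
# the ingress rewrites the path and routes it to reservation-service as /balance/rockiecookie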

Even better, by leveraging a VS Code extension and the tests prepared in the order.test.http file, we can retrieve the balance for a cookie and send an order, as shown in the following screenshot:

Figure 9.8 – Manual tests in VS Code

Again, if we retrieve the log from the reservationactor-service Dapr application after we invoke it a few times, we should see more logs than we did previously:

PS C:\Repos\dapr-samples\chapter09> kubectl logs -l app=reservationactor-service -c reservationactor-service --namespace default -f
Activating actor id: rockiecookie
Balance of rockiecookie was 587, now 584
Activating actor id: bussola86
Balance of bussola86 was -3, now -5

Here, we can see the Dapr virtual actors being activated, and their balance quantity being updated. This is the result of an order that was submitted to the order-service application, which started the saga orchestration we described in Chapter 6, Publish and Subscribe, helping it reach the reservationactor-service application we introduced in Chapter 8, Using Actors.

With our IC and ingress configured, we have gained access to the ASP.NET endpoints from outside the AKS cluster, even though we had to configure the services for each Dapr application we wish to expose.

Summary

This book introduced you to the fictitious e-commerce site Biscotti Brutti Ma Buoni. In the previous chapters, we built prototypes for several microservices. Along the way, we learned how the building blocks of Dapr enable any developer, using any language on any platform, to accelerate the development and deployment of a microservice architecture.

We have stayed focused on the building blocks of Dapr and how to combine them optimally, always remaining in the context of the local development environment. We did this by relying on Dapr’s standalone mode to test and debug our microservice code.

In this chapter, we finally shifted gears and moved toward a production-ready environment for our Dapr applications, such as a Kubernetes cluster. We learned how to configure Dapr on a Kubernetes cluster, as well as how to handle secrets and components, deploy applications, and configure ingress.

While we managed to expose our ASP.NET applications to external clients, it is important to note that we accomplished this with a generic Kubernetes approach, without leveraging any of the capabilities of Dapr.

In the next chapter, we will learn about a more interesting approach to integrating Dapr with ingress controllers in Kubernetes and how to integrate an API gateway with Dapr.
