© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
N. Vermeir, Introducing .NET 6, https://doi.org/10.1007/978-1-4842-7319-7_9

9. Application Architecture

Nico Vermeir
Merchtem, Belgium

Together with .NET 6 came tooling that helps developers build better architectures. Projects like Dapr and example projects like eShop On Containers help tremendously with building well-designed and well-architected platforms.

So where can .NET 6 help in building great architectures? There are a few concepts in .NET that help simplify things, but not to worry: .NET does not push you in any direction. We still have full flexibility to architect our applications however we see fit. What we do have are numerous syntax concepts that help keep our code small and readable.

Record Types

The quickest win is Data Transfer Objects, or DTOs. A DTO is a representation of an entity that is passed over the wire via a transfer protocol such as HTTP. DTOs are important because they prevent leaking nonpublic, internal information about entities to foreign systems. In most cases, they are basic classes containing auto-properties that map in a straightforward way onto the entity they represent. Listing 9-1 shows an example of an entity.
public class Event : Entity, IAggregateRoot
{
    private readonly List<Attendee.Attendee> _attendees;
    public string Title { get; private set; }
    public DateTime StartDate { get; private set; }
    public DateTime EndDate { get; private set; }
    public decimal Price { get; private set; }
    public int? AddressId { get; private set; }
    public Address Address { get; private set; }
    public virtual IReadOnlyCollection<Attendee.Attendee> Attendees => _attendees;
    public Event(string title, DateTime startDate, DateTime endDate, decimal price)
    {
        _attendees = new List<Attendee.Attendee>();
        Title = title;
        StartDate = startDate;
        EndDate = endDate;
        Price = price;
    }
    public void SetAddress(Address address)
    {
        AddressId = address.Id;
        Address = address;
    }
}
Listing 9-1

The entity

This is a very basic example of an entity from an application built using Domain-Driven Design principles. It inherits from Entity, which has an Id property that makes the object uniquely identifiable, and it is an IAggregateRoot, which means that this object can be stored and retrieved on its own. Entities that are not an IAggregateRoot are not meant to exist by themselves; they only exist as members of other objects.

Let’s say we need to fetch a list of events to show in our frontend. Not using DTOs would mean that we could possibly fetch hundreds of events with all Attendee and Address details, while maybe all we want to do is show a list of upcoming events. For simply listing all events, that would be far too much data. Instead, we use a DTO to trim the object that goes over the wire down to the use case at hand.

Listing 9-2 shows an example of what a DTO could look like when we want a list of events.
public class EventForList
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
}
Listing 9-2

DTO for listing events

Way less data to send over the wire, and just enough. When fetching the details of an Event, we of course need another DTO containing all the info an event detail page might need. You may realize that this can become tedious quite fast: writing DTO after DTO, mapping them to the entity, and so on. A neat compiler trick that came with C# 9 and .NET 5 can help speed this process up; that trick is called records. Listing 9-3 shows the DTO from Listing 9-2 again, but written as a record.
public record ActivityForListRecord (int Id, string Title, DateTime StartDate, DateTime EndDate);
Listing 9-3

DTO as a record

That is one line of code replacing all the auto-properties. A record is shorthand for writing a class, but there is more to it. Take equality, for example: two variables of a normal reference type are equal when they point to the same reference, while two records are equal when they hold the same values. Another difference is that records are immutable by default. The complete documentation on records can be found at https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/record.
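To make the equality and immutability behavior concrete, here is a short sketch. The EventForListRecord name mirrors the record from Listing 9-3, and the sample values are purely illustrative:

```csharp
using System;

var start = new DateTime(2022, 5, 1);
var end = new DateTime(2022, 5, 2);

var first = new EventForListRecord(1, "Techorama", start, end);
var second = new EventForListRecord(1, "Techorama", start, end);

// Records compare by value: same property values means equal,
// even though these are two separate references.
Console.WriteLine(first == second);                // True
Console.WriteLine(ReferenceEquals(first, second)); // False

// Records are immutable; "changing" one is done with a with-expression,
// which copies the record while replacing the listed properties.
var renamed = first with { Title = "Techorama 2022" };
Console.WriteLine(renamed.Title); // Techorama 2022
Console.WriteLine(first.Title);   // Techorama (the original is unchanged)

public record EventForListRecord(int Id, string Title, DateTime StartDate, DateTime EndDate);
```

Had EventForListRecord been a plain class without an Equals override, the first comparison would have printed False, because classes default to reference equality.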

The values between the brackets are not ordinary constructor parameters; they are positional parameters from which the compiler generates public properties, hence the Pascal casing. As for the output, records are nothing more than a clever compiler trick, a pinch of syntactic sugar. Listing 9-4 compares the intermediate language definition of the class from Listing 9-2 with the record from Listing 9-3; I have named them ActivityForListClass and ActivityForListRecord for clarity.
.class public auto ansi beforefieldinit ActivityForListRecord
    extends [System.Runtime]System.Object
.class public auto ansi beforefieldinit ActivityForListClass
    extends [System.Runtime]System.Object
Listing 9-4

IL output for a record and a class

As you can see, the outputs are identical, meaning records are known to the C# compiler but not to the runtime.

New to the C# language in version 10 are value-type records. Until now, records could only be reference types, that is, classes. C# 10 introduces record structs, which are value-type records. Listing 9-5 shows the earlier record example as a value type; notice the struct keyword.
public record struct ActivityForListRecord (int Id, string Title, DateTime StartDate, DateTime EndDate);
Listing 9-5

Value-type records

Let’s have a look at the IL again. Listing 9-6 shows the generated IL code.
.class public sequential ansi sealed beforefieldinit ActivityForListRecord extends [System.Runtime]System.ValueType
.class public auto ansi beforefieldinit ActivityForListClass
    extends [System.Runtime]System.Object
Listing 9-6

IL output for a record struct and a class

Struct records follow the same rules as normal structs. Structs are often used because, being value types, they are cheaper memory-wise, which often results in better performance. They do have limitations compared to classes; for example, structs don’t allow inheritance. A major difference between records and record structs is that record structs are not immutable by default; they can be made immutable by marking them readonly.
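The mutability difference can be sketched in a few lines. The two Counter types here are hypothetical, purely for illustration: a plain record struct exposes settable properties, while a readonly record struct only exposes init-only properties and must be copied with a with-expression to change a value.

```csharp
using System;

var mutable = new MutableCounter(0);
mutable.Count = 5;                // compiles: record struct properties are get/set by default
Console.WriteLine(mutable.Count); // 5

var frozen = new FrozenCounter(0);
// frozen.Count = 5;              // would not compile: readonly record struct is init-only
var bumped = frozen with { Count = 5 }; // nondestructive mutation still works
Console.WriteLine(bumped.Count);  // 5

// Value-type semantics: assignment copies the struct, so copies are independent.
var copy = mutable;
copy.Count = 9;
Console.WriteLine(mutable.Count); // still 5

public record struct MutableCounter(int Count);
public readonly record struct FrozenCounter(int Count);
```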

Monolith Architecture

Monolith applications contain everything in one or two services, usually a frontend and a backend. Before microservices, which we will talk about next, monoliths were very common. Figure 9-1 describes what a monolith architecture looks like.
Figure 9-1

Monolith architecture

In this example, we have a web client and a mobile client; both speak to the same API that in turn is connected to a data store. Depending on the size of the application, this API can potentially be huge. Let’s say there is one part of the API that is seeing intense usage and is slowing the entire API down. To solve this, we would need to scale the entire API or move it to a server with more power. Even worse, the entire system can go down because of a bottleneck in one place.

Another disadvantage of monolith services is maintainability. One big service containing all business logic is hard to maintain; it is even hard to keep an overview of what lives where in the source code.

However, not everything is bad about monolith architecture. Depending on the size and complexity of your application, this might still be the right choice for you, since microservices bring extra layers of complexity along with their advantages.

Microservices

Microservice architecture is a variation on service-oriented architecture. Building a microservices-based application means that the backend is split into different loosely coupled services. Each service has its own responsibility and has no knowledge of the other services. Communication between services usually happens over a message bus. To prevent client applications from having to call multiple endpoints, we can implement a gateway per application, or per type of application, should we need to. That gateway knows the endpoints of the microservices the application needs. Figure 9-2 shows a high-level architecture schema for a microservices-based application.
Figure 9-2

Microservices architecture

There is a lot to like about a microservices-oriented architecture. The split responsibilities mean that we can scale the parts where scaling is needed instead of just pumping more memory into the virtual server, and we can create gateways per client so that only the absolutely necessary parts of the backend platform are exposed. But it also brings added complexity and cost: since each service is basically its own application, we need a lot of application servers, and all of those servers need to be maintained. Even if we go with a container orchestration system like Kubernetes, we get extra overhead, and exactly this is the danger of overengineering or over-architecting an application. Microservices are a great architecture pattern, but they are not a silver bullet for all applications; depending on your use case, a monolith application might be just fine.

Microservices work great in a Domain-Driven-Design (DDD) or Clean Architecture (CA) scenario. The scope of a microservice can, in most cases, map to a bounded context. Domain-Driven-Design and Clean Architecture are widely popular design patterns for enterprise applications. They both give the domain model responsibility for changes and nicely decouple read and write requests. Both are really great patterns to add to your arsenal as a developer.

A bounded context is a functional block of your application that can be isolated. For example, the orders of a webshop can contain products, customers, purchases, and so on. That isolated block of orders functionality can be a bounded context. However, just like with Microservices, DDD and CA have their place in larger applications. Don’t overengineer; use the right tool for the job instead of using a sledgehammer to drive a nail in a wooden board.

If you are interested in learning more about Clean Architecture or Domain-Driven Design, I can advise you to take a look at the eShop On Containers e-book or the Practical Event-Driven Microservices Architecture book available from Apress.

Container Orchestration

We talked about containers, specifically Docker-based containers, in the ASP.NET chapter. Containers and microservices are a great match, provided there is an orchestrator. A container orchestrator is a tool that manages a set of different container images and how they relate to each other: Can they communicate? Over what port? Which containers get exposed outside of the cluster? And so on. The most common orchestrators are Kubernetes and Docker Compose.

Kubernetes

Kubernetes, or k8s for short (https://kubernetes.io), is a container orchestrator. It can automatically deploy and scale your containerized applications. A set of containers deployed on a Kubernetes instance is called a cluster. To explore the capabilities of Kubernetes, I can advise you to install Minikube via https://minikube.sigs.k8s.io/. Minikube is a local Kubernetes cluster installation that you can use for development. It is available for Windows, Linux, and macOS. The installer and install instructions can be found at https://minikube.sigs.k8s.io/docs/start/.
Figure 9-3

Running Minikube on WSL2

Once Minikube is installed, we can use the Kubernetes CLI through the kubectl command.
Figure 9-4

Kubernetes CLI

Time for some terminology. A Kubernetes cluster consists of Nodes. Nodes are actual machines, virtual or physical servers, that have Kubernetes installed and are added to the cluster. Running kubectl get nodes lists the available nodes in the cluster; a local installation of Minikube is a cluster with one node.
Figure 9-5

Nodes in a Minikube cluster

One of the nodes is the control plane: the node that controls the cluster. Communication to and from the control plane happens over the Kubernetes API.

A deployed container on a node is called a Pod. For this example, we will create a Pod from one of the services in eShop On Containers. eShop On Containers is an open source reference architecture by Microsoft; it can be found at https://github.com/dotnet-architecture/eShopOnContainers. We are using it as an example because the eShop is a container-ready microservices architecture, which fits the topic at hand quite well.

Time to create a Pod. Listing 9-7 shows the command to create a deployment on our local Kubernetes cluster.
kubectl create deployment apresseshop --image=eshop/catalog.api
Listing 9-7

Creating a new deployment to Kubernetes

The deployment, and with it the pod, gets created. The image is pulled in the background, and the container spins up when ready. To check the status of the pods, we can use kubectl get pods.
Figure 9-6

1 Pod running on local cluster

Of course, this is a very basic example that barely scratches the surface of what Kubernetes is intended for. As a more elaborate example, I have deployed the entire eShop On Containers example on my local cluster.
Figure 9-7

Deploying a larger Kubernetes cluster

I didn’t have to manually create each container; that would defeat the purpose of a container orchestrator. Instead, the project contains YAML files that Kubernetes can use to deploy and configure a set of services.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
  labels:
    app: catalogApi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          imagePullPolicy: IfNotPresent
          image: eshop/catalog.api
Listing 9-8

An example Kubernetes file

Listing 9-8 shows an example of a Kubernetes file that spins up a pod of the catalog API.

Docker Compose

Docker Compose is a popular alternative to Kubernetes. It is designed more for working on a single node, while Kubernetes really shines in big enterprise, multi-server environments. This also means that the learning curve for Docker Compose is much gentler. Using Docker Compose is simple: make sure your applications have their own Dockerfile, create a docker-compose.yml, and run the up command on the Docker Compose CLI. We should already have Docker Compose available since we installed Docker Desktop in the previous chapter; Docker Compose is packaged together with Docker Desktop on Windows. On Linux, it can be installed through Python’s package manager pip or by downloading the binary from GitHub. Detailed instructions can be found in the Docker Compose documentation at https://docs.docker.com/compose/install/. Listing 9-9 shows a simple example of a Docker Compose file using two of the eShop images.
version: "3"
services:
  catalogapi:
    container_name: catalogApi
    image: eshop/catalog.api
    restart: unless-stopped
  webmvc:
    container_name: webmvc
    image: eshop/webmvc
    restart: unless-stopped
Listing 9-9

Example of Docker Compose file

To run this, we execute docker-compose up in a command line window.
Figure 9-8

Running two containers in Docker Compose

From this point on, the CLI will start printing the debug output from the different running containers. If you want to run your containers in the background, you can use the -d flag.
docker-compose up -d
Listing 9-10

Running Docker Compose in the background

The Docker Compose file can be further expanded by adding volumes for persistent storage or network capabilities; all the information on how to do that can be found at the official Docker Compose documentation.

Dapr

The Distributed Application Runtime (Dapr) provides APIs that simplify microservice connectivity. The complete documentation for Dapr can be found at https://docs.dapr.io/. It is a Microsoft-owned open-source project that can help simplify the management of large distributed systems; consider it a “microservices toolkit.” Dapr provides capabilities such as service-to-service communication, state management, publish/subscribe messaging, observability, secrets management, and so on. All these capabilities are abstracted away behind Dapr’s building blocks. Dapr by itself is large enough to fill an entire book; what I want to do here is give you an idea of what Dapr is about so you can determine for yourself whether you can use it in your project.

Installing Dapr

The first step is installing the Dapr CLI. Dapr provides scripts that download the binaries and update the path variables. The easiest way is to execute the script in Listing 9-11. Other ways to install can be found at https://docs.dapr.io/getting-started/install-dapr-cli/.
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
Listing 9-11

Installing Dapr CLI

Once the CLI is installed, we need to initialize our Dapr environment by calling dapr init on the command line. Make sure to have Docker installed before Dapr, as Dapr relies on containers to get its components up and running locally.
Figure 9-9

Setting up Dapr

Once initialized, we can find some new containers running in our local Docker setup.
Figure 9-10

Dapr containers running on Docker

Now that we have everything set up, we can get to work. Dapr works according to the sidecar pattern, meaning that we don’t have to include all components and code in our own application; we only make Dapr API calls that go to the sidecar attached to our application. That sidecar abstracts all the underlying logic away from us.

The sidecar pattern is a design pattern where components of an application are deployed into separate processes or containers. This provides isolation and encapsulation.

Dapr State Management

Let’s use the Dapr state management component as an example. State management in Dapr is done by default through Redis. Dapr abstracts the logic of setting up Redis and calling its APIs away from us; we only need to call Dapr APIs to get state management up and running.

For this example, I have created a blank Console application using .NET 6.
using Dapr.Client;

// The name of the Dapr state store component and the key we store our value under.
const string storeName = "daprstate";
const string key = "counter";

var daprClient = new DaprClientBuilder().Build();

// Read the previously saved counter; this returns 0 if the key does not exist yet.
var counter = await daprClient.GetStateAsync<int>(storeName, key);
while (true)
{
    Console.WriteLine($"Counter state: {counter++}");
    // Persist the incremented value through the Dapr sidecar.
    await daprClient.SaveStateAsync(storeName, key, counter);
    await Task.Delay(1000);
}
Listing 9-12

Calling Dapr state management

We need to add the Dapr.Client NuGet package to the project and make sure Dapr is up and running. Once everything is set up correctly, we can run our .NET 6 application inside the Dapr environment with the Redis sidecar. Listing 9-13 shows the command we can use to launch the application.
dapr run --app-id DaprCounter dotnet run
Listing 9-13

Launching the application using Dapr CLI

The output will be the counter increasing. If you stop and relaunch the application, you will notice that the counter does not start from zero again; its state was saved in Redis across restarts.
Figure 9-11

The Dapr sidecar model

This was just one very simple example of Dapr. The major advantage is that Dapr takes a bunch of components and principles and bundles them into one developer model. We only need to develop against the Dapr API; everything else is handled by the runtime.

Wrapping Up

.NET has always been a framework that promotes good, clean architectures, and it continues that trend with .NET 6. Open-source reference projects like eShop On Containers help guide developers and application architects in finding the best architecture for their projects. Frameworks like Dapr can help ease the struggles of managing all the different building blocks in distributed applications. But as always, there is no one-size-fits-all. Look at the project you want to build from a higher, more abstract viewpoint, and choose the right architecture for the job. Not everything is suited for a complex DDD setup; don’t overengineer, but keep things simple.
