5

Applying a Microservice Architecture to Your Enterprise Application

This chapter is dedicated to describing highly scalable architectures based on small modules called microservices. The microservice architecture allows for fine-grained scaling operations, where every single module can be scaled as required without affecting the remainder of the system. Moreover, it allows for better Continuous Integration/Continuous Deployment (CI/CD) by permitting every system subpart to evolve and be deployed independently of the others.

In this chapter, we will cover the following topics:

  • What are microservices?
  • When do microservices help?
  • How does .NET deal with microservices?
  • Which tools are needed to manage microservices?

By the end of this chapter, you will have learned how to implement a single microservice in .NET. Chapter 6, Azure Kubernetes Service, also explains how to deploy, debug, and manage a whole microservices-based application. Chapter 14, Implementing Microservices with .NET, and Chapter 16, Implementing Frontend Microservices with ASP.NET Core, are step-by-step guides to the practical implementation of microservices with .NET.

Technical requirements

In this chapter, you will require the following:

  • Visual Studio 2022 free Community Edition or better with all the database tools installed.
  • A free Azure account. The Creating an Azure account section in Chapter 1, Understanding the Importance of Software Architecture, explains how to create one.
  • Docker Desktop for Windows if you want to debug Docker containerized microservices in Visual Studio (https://www.docker.com/products/docker-desktop).

In turn, Docker Desktop for Windows requires at least Windows 10 with either WSL (Windows Subsystem for Linux) or Windows Containers installed.

WSL enables Docker containers to run on a Linux virtual machine and can be installed as follows:

  1. Type powershell in the Windows 10 search bar.
  2. When Windows PowerShell is proposed as a search result, click on Run as administrator.
  3. In the Windows PowerShell administrative console that appears, run the command wsl --install.

Windows Containers enables Docker containers to run directly in Windows but requires at least the Windows Professional edition. It can be installed as follows:

  1. Type Windows features in the Windows 10 search bar.
  2. The search results will propose running the panel for enabling/disabling Windows features.
  3. Click on it, and in the window that opens, select Containers.

What are microservices?

Microservice architectures allow each module that makes up a solution to be scaled independently from the others to achieve the maximum throughput with minimal cost. In fact, scaling whole systems instead of their current bottlenecks inevitably results in a remarkable waste of resources, so fine-grained control of subsystem scaling has a considerable impact on the system’s overall cost.

However, microservices are more than scalable components – they are software building blocks that can be developed, maintained, and deployed independently of each other. Splitting development and maintenance among modules that are independently developed, maintained, and deployed improves the overall system’s CI/CD cycle (the CI/CD concept was explained in more detail in the Organizing your work using Azure DevOps section in Chapter 3, Documenting Requirements with Azure DevOps).

The CI/CD improvement comes from microservice independence, which enables the following:

  • Scaling and distributing microservices on different types of hardware.
  • Since each microservice is deployed independently from the others, there can’t be binary compatibility or database structure compatibility constraints. Therefore, there is no need to align the versions of the different microservices that compose the system. This means that each of them can evolve, as needed, without being constrained by the others.

    However, attention must be paid to the choice of communication protocols and messages, and to their versions, which must be supported by all involved microservices. Protocols that are widely supported and that facilitate backward compatibility with previous versions of messages should be preferred.

  • Assigning their development to completely separate smaller teams, thus simplifying job organization and reducing all the inevitable coordination inefficiencies that arise when handling large teams.
  • Implementing each microservice with more adequate technologies and in a more adequate environment, since each microservice is an independent deployment unit. This means choosing tools that best fit your requirements and an environment that minimizes development efforts and/or maximizes performance.
  • Since each microservice can be implemented with different technologies, programming languages, tools, and operating systems, enterprises can use all available human resources by matching environments with developers’ competencies. For instance, all available Java and .NET developers can cooperate in the same application, thus exploiting all available resources.
  • Legacy subsystems can be embedded in independent microservices, thus enabling them to cooperate with newer subsystems. This way, companies may reduce the time to market of new system versions. Moreover, this way, legacy systems can evolve slowly toward more modern systems with an acceptable impact on costs and the organization.

The next subsection explains how the concept of microservices was conceived. Then, we will continue this introductory section by exploring basic microservice design principles and analyzing why microservices are often designed as Docker containers.

Microservices and the evolution of the concept of modules

For a better understanding of the advantages of microservices, as well as of their design techniques, we must keep the twofold nature of software modularity, and of software modules, in mind:

  • Code modularity refers to code organization that makes it easy for us to modify a chunk of code without affecting the remainder of the application. It is usually enforced with object-oriented design, where modules can be identified with classes.
  • Deployment modularity depends on what your deployment units are and which properties they have. The simplest deployment units are executable files and libraries. Thus, for instance, dynamic link libraries (DLLs) are, for sure, more modular than static libraries, since they need not be linked with the main executable before being deployed.

While the fundamental concepts of code modularity have reached a stable form, the concept of deployment modularity is still evolving, and microservices are currently the state of the art along this evolution path.

As a short review of the main milestones on the path that led to microservices, we can say that, first, monolithic executables were broken into static libraries. Later on, DLLs replaced static libraries.

A great change took place when .NET (and other analogous frameworks, such as Java) improved the modularity of executables and libraries. In fact, with .NET, they can be deployed on different hardware and on different operating systems, since they are deployed in an intermediate language that is compiled when the library is executed for the first time. Moreover, they overcome some versioning issues of previous DLLs, since any executable can bring with it a DLL whose version differs from the version of the same DLL that is installed on the operating system.

However, .NET can’t accept two referenced DLLs – let’s say, A and B – using two different versions of a common dependency – let’s say, C. For instance, suppose there is a newer version of A, with many new features we would like to use, that, in turn, relies on a newer version of C that is not supported by B. In this situation, we should give up the newer version of A because of the incompatibility of C with B. This difficulty has led to two important changes:

  • The development world moved from DLLs and/or single files to package management systems such as NuGet and npm, which automatically check version compatibility with the help of semantic versioning.
  • Service-Oriented Architecture (SOA). Deployment units started being implemented as SOAP and then as REST web services. This solves the version compatibility problem since each web service runs in a different process and can use the most adequate version of each library with no risk of causing incompatibilities with other web services. Moreover, the interface that is exposed by each web service is platform-agnostic; that is, web services can connect with applications using any framework and run on any operating system since web service protocols are based on universally accepted standards. SOAs and protocols will be discussed in more detail in Chapter 13, Applying Service-Oriented Architectures with .NET.

Microservices are an evolution of SOA and add more features and more constraints that improve the scalability and the modularity of services to improve the overall CI/CD cycle. It’s sometimes said that microservices are SOA done well.

Microservices design principles

To sum things up, the microservice architecture is an SOA that maximizes independence and fine-grained scaling. Now that we’ve clarified all the advantages of microservice independence and fine-grained scaling, as well as the very nature of independence, we are in a position to look at microservice design principles.

Let’s start with principles that arise from the independence constraint. We will discuss them each in a separate subsection.

The independence of design choices

The design of each microservice must not depend on the design choices that were made in the implementation of other microservices. This principle enables the full independence of each microservice CI/CD cycle and leaves us with more technological choices on how to implement each microservice. This way, we can choose the best available technology to implement each microservice.

Another consequence of this principle is that different microservices can’t connect to the same shared storage (database or filesystem) since sharing the same storage also means sharing all the design choices that determined the structure of the storage subsystem (database table design, database engine, and so on). Thus, either a microservice has its own data storage or it has no storage at all and communicates with other microservices that take care of handling storage.

Here, having dedicated data storage doesn’t mean that the physical database runs within the process boundary of the microservice itself, but that the microservice has exclusive access to a database or set of database tables handled by an external database engine. In fact, for performance reasons, database engines must run on dedicated hardware and with OS and hardware features that are optimized for their storage functionalities.

Usually, independence of design choices is interpreted in a lighter form by distinguishing between logical and physical microservices. More specifically, a logical microservice is implemented with several physical microservices that use the same data storage but that are load-balanced independently. That is, the logical microservice is designed as a logical unit and then split into more physical microservices to achieve better load balance.

Independence from the deployment environment

Microservices are scaled out on different hardware nodes, and different microservices can be hosted on the same node. Therefore, the less a microservice relies on the services offered by the operating system and on other installed software, the more available hardware nodes it can be deployed on, and the better the usage of each node can be optimized.

This is the reason why microservices are usually containerized and use Docker. Containers will be discussed in more detail in the Containers and Docker subsection of this chapter, but basically, containerization is a technique that allows each microservice to bring its dependencies with it so that it can run anywhere.

Loose coupling

Each microservice must be loosely coupled with all the other microservices. This principle has a twofold nature. On the one hand, it means that, according to object-oriented programming principles, the interface exposed by each microservice must not be too specific, but as general as possible. On the other hand, it means that communications among microservices must be minimized in order to reduce communication costs, since microservices don’t share the same address space and run on different hardware nodes.

No chained requests/responses

When a request reaches a microservice, it must not cause a recursive chain of nested requests/responses to other microservices, since such a chain would result in an unacceptable response time. Chained requests/responses can be avoided if the private data models of all the microservices are synchronized with push events each time they change. In other words, as soon as the data handled by a microservice changes, those changes are sent to all the microservices that may need them to serve their requests. This way, each microservice has all the data it needs to serve all its incoming requests in its private data storage, with no need to ask other microservices for the data it lacks.

Figure 5.1 shows how updates are sent to all interested microservices as soon as they are produced, and how each microservice combines all received updates in a local database. This way, each query microservice has all the data it needs to answer queries in its local database.

Figure 5.1: Push events

In conclusion, every microservice must contain all the data it needs to serve incoming requests and ensure fast responses. To keep their data models up to date and ready for incoming requests, microservices must communicate their data changes as soon as they take place. These data changes should be communicated through asynchronous messages since synchronous nested messages cause unacceptable performance because they block all the threads involved in the call tree until a result is returned.
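
For instance, a data change can be modeled as a small event contract that is published to all interested microservices, each of which applies it to its private data store. The following is an illustrative sketch; the event name and fields are assumptions, not part of a specific framework:

public record PriceChangedEvent(
    Guid ItemId,       // the item whose price changed
    decimal NewPrice,  // the new value to store in the local database
    long Version);     // used to detect duplicate or out-of-order updates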

It is worth pointing out that the Independence of design choices principle is substantially the bounded context principle of domain-driven design, which we will talk about in detail in Chapter 11, Understanding the Different Domains in Software Solutions. In that chapter, we will see that, often, a full domain-driven design approach is useful for the update subsystem of each microservice.

It is no accident that, in general, systems that have been developed according to the bounded context principle are better implemented with a microservice architecture. In fact, once a system has been decomposed into several completely independent and loosely coupled parts, it is very likely that these different parts will need to be scaled independently because of different traffic and different resource requirements.

To the preceding constraints, we must also add some best practices for building a reusable SOA. More details on these best practices will be given in Chapter 13, Applying Service-Oriented Architectures with .NET, but nowadays, most SOA best practices are automatically enforced by the tools and frameworks used to implement web services.

Fine-grained scaling requires that microservices are small enough to isolate well-defined functionalities, but this also requires a complex infrastructure that takes care of automatically instantiating microservices, allocating instances on various hardware computational resources, commonly called nodes, and scaling them as needed. These kinds of structures will be introduced in the Which tools are needed to manage microservices? section of this chapter, and discussed in detail in Chapter 6, Azure Kubernetes Service.

Moreover, fine-grained scaling of distributed microservices that communicate through asynchronous communication requires each microservice to be resilient. In fact, communication that’s directed to a specific microservice instance may fail due to a hardware fault or for the simple reason that the target instance was killed or moved to another node during a load balancing operation.

Temporary failures can be overcome with exponential retries. This is where we retry the same operation after each failure with a delay that increases exponentially until a maximum number of attempts is reached. For instance, first, we would retry after 10 milliseconds, and if this retry operation results in a failure, a new attempt is made after 20 milliseconds, then after 40 milliseconds, and so on.

On the other hand, long-term failures often cause an explosion of retry operations that may saturate all system resources in a way that is similar to a denial-of-service attack. Therefore, exponential retries are usually combined with a circuit breaker strategy: after a given number of failures, a long-term failure is assumed, and access to the resource is prevented for a given time by returning an immediate failure without attempting the communication operation.

It is also fundamental that the congestion of some subsystems, due to either failure or to a request peak, does not propagate to other system parts, in order to prevent overall system congestion. Bulkhead isolation avoids congestion propagation in the following ways:

  • Only a maximum number of similar simultaneous outbound requests are allowed; let’s say, 10. This is similar to putting an upper bound on thread creation.
  • Requests exceeding the previous bound are queued.
  • If the maximum queue length is reached, any further requests result in exceptions being thrown to abort them.

Retry policies may cause the same message to be received and processed several times, because the sender has received no confirmation that the message arrived, or simply because the operation timed out even though the receiver actually received the message. The only possible solution to this problem is to design all messages so that they’re idempotent, that is, to design messages in such a way that processing the same message several times has the same effect as processing it once.

Updating a database table field to a given value, for instance, is an idempotent operation, since performing it once or twice has exactly the same effect. However, incrementing a decimal field is not an idempotent operation. Microservice designers should make an effort to design the overall application with as many idempotent messages as possible.
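
The same contrast can be expressed in code. In this illustrative sketch, Product is a hypothetical entity with a Price property:

// Idempotent: processing the same message twice leaves the same state.
void ApplySetPrice(Product product, decimal newPrice) =>
    product.Price = newPrice;

// Not idempotent: each duplicate delivery changes the state again.
void ApplyIncrementPrice(Product product, decimal delta) =>
    product.Price += delta;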

The remaining non-idempotent messages must be transformed into idempotent ones in the following way (a minimal sketch follows the list), or with some other similar technique:

  • Attach both a time and some identifier that uniquely identify each message.
  • Store all the messages that have been received in a dictionary indexed by the unique identifier mentioned in the previous point.
  • Reject old messages.
  • When a message that may be a duplicate is received, verify whether it’s contained in the dictionary. If it is, then it has already been processed, so reject it.
  • Since old messages are rejected, they can be periodically removed from the dictionary to prevent it from growing indefinitely.
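
The following is a minimal sketch of this technique, assuming that each message carries a unique identifier and a timestamp; the IncomingMessage type and the five-minute window are illustrative assumptions:

using System;
using System.Collections.Concurrent;

public record IncomingMessage(Guid Id, DateTime Timestamp);

public class DuplicateFilter
{
    private readonly ConcurrentDictionary<Guid, DateTime> _processed = new();
    private static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(5);

    public bool ShouldProcess(IncomingMessage message)
    {
        // Reject old messages outright.
        if (DateTime.UtcNow - message.Timestamp > MaxAge) return false;
        // TryAdd returns false if the identifier is already in the
        // dictionary, that is, if the message was already processed.
        return _processed.TryAdd(message.Id, message.Timestamp);
    }

    // Called periodically: since old messages are rejected anyway,
    // their identifiers can be dropped to bound the dictionary size.
    public void Cleanup()
    {
        var cutoff = DateTime.UtcNow - MaxAge;
        foreach (var pair in _processed)
            if (pair.Value < cutoff) _processed.TryRemove(pair.Key, out _);
    }
}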

In Chapter 14, Implementing Microservices with .NET, we will use this technique, and we will discuss communication and coordination problems in more detail.

It is worth pointing out that some message brokers, such as Azure Service Bus, offer facilities for implementing the technique described previously. However, the receiver must always be able to recognize duplicate messages since, due to time-outs in the reception of acknowledgements, messages might be resent. Azure Service Bus is discussed in the .NET communication facilities subsection.

In the next subsection, we will talk about microservice containerization based on Docker.

Containers and Docker

We’ve already discussed the advantages of having microservices that don’t depend on the environment where they run: better hardware usage, the ability to mix legacy software with newer modules, the ability to mix several development stacks in order to use the best stack for each module implementation, and so on. Independence from the hosting environment can be easily achieved by deploying each microservice with all its dependencies on a private virtual machine.

However, starting a virtual machine with its private copy of the operating system takes a lot of time, and microservices must be started and stopped quickly to reduce load balancing and fault recovery costs. In fact, new microservices may be started either to replace faulty ones or because they were moved from one hardware node to another to perform load balancing. Moreover, adding a whole copy of the operating system to each microservice instance would be an excessive overhead.

Luckily, microservices can rely on a lighter form of technology: containers. Containers are a kind of light virtual machine. They do not virtualize a full machine – they just virtualize the OS filesystem level that sits on top of the OS kernel. They use the OS of the hosting machine (kernel, DLLs, and drivers) and rely on the OS’s native features to isolate processes and resources to ensure an isolated environment for the images they run.

As a consequence, containers are tied to a specific operating system, but they don’t suffer the overhead of copying and starting a whole OS in each container instance.

On each host machine, containers are handled by a runtime that takes care of creating them from images and creating an isolated environment for each of them. The most popular container image format is Docker, which is a de facto standard for container images.

Images contain files needed to create each container and specify which container resources, such as communication ports, to expose outside of the container. However, they need not explicitly contain all involved files, since they can be layered. This way, each image is built by adding new files and configuration information on top of another existing image that is referenced from inside the newly defined image.

For instance, if you want to deploy a .NET application as a Docker image, it is enough to just add your software and files to your Docker image and then reference an already existing .NET Docker image.

To allow for easy image referencing, images are grouped into registries that may be either public or private. They are similar to NuGet or npm registries. Docker offers a public registry (https://hub.docker.com/_/registry) where you can find most of the public images you may need to reference in your own images. However, each company can define private registries. For instance, Microsoft offers Azure Container Registry, where you can define your private container registry service: https://azure.microsoft.com/en-us/services/container-registry/. There, you can also find most of the .NET-related images you might need to reference in your code.

Before instantiating each container, the Docker runtime must solve all the recursive references. This cumbersome job is not performed each time a new container is created, since the Docker runtime has a cache where it stores the fully assembled images that correspond to each input image it has already processed.

Since each application is usually composed of several modules to be run in different containers, a tool called Docker Compose is also available. It processes .yml files, known as composition files, that specify the following information:

  • Which images to deploy.
  • How the internal resources that are exposed by each image must be mapped to the physical resources of the host machine. For instance, how communication ports that are exposed by Docker images must be mapped to the ports of the physical machine.

We will analyze Docker images and .yml files in the How does .NET deal with microservices? section of this chapter.

The Docker runtime handles images and containers on a single machine but, usually, containerized microservices are deployed and load-balanced on clusters that are composed of several machines. Clusters are handled by pieces of software called orchestrators. Orchestrators will be introduced in the Which tools are needed to manage microservices? section of this chapter, and described in detail in Chapter 6, Azure Kubernetes Service.

Now that we have understood what microservices are, what problems they can solve, and their basic design principles, we are ready to analyze when and how to use them in our system architecture. The next section analyzes when we should use them.

When do microservices help?

The answer to this question requires us to understand the roles microservices play in modern software architectures. We will look at this in the following two subsections:

  • Layered architectures and microservices
  • When is it worth considering microservice architectures?

Let’s start with a detailed look at layered architectures and microservices.

Layered architectures and microservices

Enterprise systems are usually organized in logical independent layers. The first layer is the one that interacts with the user and is called the presentation layer, while the last layer takes care of storing/retrieving data and is called the data layer. Requests originate in the presentation layer and pass through all the layers until they reach the data layer, and then come back, traversing all the layers in reverse until they reach the presentation layer, which takes care of presenting the results to the user/client. Layers can’t be jumped.

Each layer takes data from the previous layer, processes it, and passes it to the next layer. Then, it receives the results from its next layer and sends them back to its previous layer. Also, thrown exceptions can’t jump layers – each layer must take care of intercepting all the exceptions and either solving them somehow or transforming them into other exceptions that are expressed in the language of its previous layer. The layer architecture ensures the complete independence of the functionalities of each layer from the functionalities of all the other layers.

For instance, we can change the database engine without affecting all the layers that are above the data layer. In the same way, we can completely change the user interface, that is, the presentation layer, without affecting the remainder of the system.

Moreover, each layer implements a different kind of system specification. The data layer takes care of what the system must remember, the presentation layer takes care of the system-user interaction protocol, and all the layers that are in the middle implement the domain rules, which specify how data must be processed (for instance, how an employee paycheck must be computed). Typically, the data and presentation layers are separated by just one domain rule layer, called the business or application layer.

Figure 5.2: Layers

Each layer speaks a different language: the data layer speaks the language of the chosen storage engine, the business layer speaks the language of domain experts, and the presentation layer speaks the language of users. So, when data and exceptions pass from one layer to another, they must be translated into the language of the destination layer.

A detailed example of how to build a layered architecture will be given in the Use case – understanding the domains of the use case section in Chapter 11, Understanding the Different Domains in Software Solutions, which is dedicated to domain-driven design.

That being said, how do microservices fit into a layered architecture? Are they adequate for the functionalities of all the layers or just some layers? Can a single microservice span several layers?

The last question is the easiest to answer: yes! In fact, we’ve already stated that microservices should store the data they need within their logical boundaries. Therefore, there are microservices that span the business and data layers. Some others take care of encapsulating shared data and remain confined in the data layer. Thus, we may have business layer microservices, data layer microservices, and microservices that span both layers. So, what about the presentation layer?

The presentation layer

The presentation layer can also fit into a microservice architecture if it is implemented on the server side. Single-page applications and mobile applications run the presentation layer on the client machine, so they either connect directly to the business microservices layer or, more often, to an API gateway that exposes the public interface and takes care of routing requests to the right microservices.

In a microservice architecture, when the presentation layer is a website, it can be implemented with a set of microservices. However, if it requires heavy web servers and/or heavy frameworks, containerizing them may not be convenient. This decision must also consider the loss of performance that happens when containerizing the web server and the possible need for hardware firewalls between the web server and the remainder of the system.

ASP.NET Core is a lightweight framework that runs on the light Kestrel web server, so it can be containerized efficiently and used as-is in the worker microservices that are behind the frontend microservices.

Instead, frontend and/or high-traffic websites have more compelling security and load balancing requirements that can be satisfied only by fully featured web servers. Accordingly, architectures based on microservices usually offer specialized components that take care of interfacing with the outside world. For instance, in Chapter 6, Azure Kubernetes Service, we will see that in Kubernetes clusters, this role is played by so-called ingresses. Ingresses are based on fully featured web servers, and each of them may route its traffic to several Kestrel-based frontend microservices.

Monolithic websites can be easily broken into load-balanced smaller subsites without microservice-specific technologies, but a microservice architecture can bring all the advantages of microservices into the construction of a single HTML page. More specifically, different microservices may take care of different areas of each HTML page. In this case, we speak of micro-frontends.

When the HTML is created on the server side, the various micro-frontends create HTML chunks that are combined either on the server side or directly in the browser.

When, instead, the HTML is created directly on the client, each micro-frontend provides a different chunk of code to the client. These code chunks are run on the client machine, and each of them takes care of different pages/page areas. We will speak more of this kind of micro-frontends in Chapter 17, Blazor WebAssembly.

Now that we’ve clarified which parts of a system can benefit from the adoption of microservices, we are ready to state the rules when it comes to deciding how they’re adopted.

When is it worth considering microservice architectures?

Microservices can improve the implementation of both the business and data layer, but their adoption has some costs:

  • Allocating instances to nodes and scaling them has a cost in terms of cloud fees or internal infrastructures and licenses.
  • Splitting a unique process into smaller communicating processes increases communication costs and hardware needs, especially if the microservices are containerized.
  • Designing and testing software for a microservice requires more time and increases engineering costs, both in time and complexity. In particular, making microservices resilient and ensuring that they adequately handle all possible failures, as well as verifying these features with integration tests, can increase the development time by more than one order of magnitude.

So, when are microservices worth the cost of using them? Are there functionalities that must be implemented as microservices?

A rough answer to the second question is: yes, when the application is big enough in terms of traffic and/or software complexity. In fact, as an application grows in complexity and its traffic increases, the savings enabled by fine-grained scaling and by a better-organized development team soon exceed the costs of microservice adoption.

Thus, if fine-grained scaling makes sense for our application, and if we are able to estimate the savings that fine-grained scaling and independent development give us, we can easily compute an overall application traffic threshold above which the adoption of microservices becomes convenient.

Microservice costs can also be justified by the increase in the market value of our products/services. Since the microservice architecture allows us to implement each microservice with a technology that has been optimized for its use, the quality that’s added to our software may justify all or part of the microservice costs.

However, scaling and technology optimizations are not the only parameters to consider. Sometimes, we are forced to adopt a microservice architecture without being able to perform a detailed cost analysis.

If the size of the team that takes care of the CI/CD of the overall system grows too much, the organization and coordination of this big team cause difficulties and inefficiencies. In this type of situation, it is desirable to move to an architecture that breaks the whole CI/CD cycle into independent parts that can be taken care of by smaller teams.

Moreover, since these development costs are only justified by a high volume of requests, we probably have high traffic being processed by independent modules that have been developed by different teams. Therefore, scaling optimizations and the need to reduce interaction between development teams make the adoption of a microservice architecture very convenient.

From this, we may conclude that, if the system and the development team grow too much, it is necessary to split the development team into smaller teams, each working on an efficient bounded context subsystem. It is very likely that, in a similar situation, a microservice architecture is the only possible option.

Another situation that forces the adoption of a microservice architecture is the integration of newer subparts with legacy subsystems based on different technologies. In fact, containerized microservices are the only way to implement an efficient interaction between the legacy system and the new subparts, so that the legacy subparts can be gradually replaced with newer ones. Similarly, if our team is composed of developers with experience in different development stacks, an architecture based on containerized microservices may become a must.

In the next section, we will analyze building blocks and tools that are available for the implementation of .NET-based microservices.

How does .NET deal with microservices?

The new .NET, which evolved from .NET Core, was conceived as a multi-platform framework that was light and fast enough to implement efficient microservices. In particular, ASP.NET Core is the ideal tool for implementing the text-based REST and binary gRPC APIs used to communicate with microservices, since it can run efficiently with light web servers such as Kestrel and is itself light and modular.

The whole .NET stack evolved with microservices as a strategic deployment platform in mind and has facilities and packages for building efficient and light HTTP and gRPC communication to ensure service resiliency and to handle long-running tasks. The following subsections describe some of the different tools or solutions that we can use to implement a .NET-based microservice architecture.

.NET communication facilities

Microservices need two kinds of communication channels:

  • The first is a communication channel to receive external requests, either directly or through an API gateway. HTTP is the usual protocol for external communication due to available web service standards and tools. .NET’s main HTTP/gRPC communication facility is ASP.NET Core since it’s a lightweight HTTP/gRPC framework, which makes it ideal for implementing web APIs in small microservices. We will describe ASP.NET apps in detail in Chapter 13, Applying Service-Oriented Architectures with .NET, which is dedicated to HTTP, and we will describe gRPC services in Chapter 14, Implementing Microservices with .NET. .NET also offers an efficient and modular HTTP client solution that is able to pool and reuse heavy connection objects. Also, the HttpClient class will be described in more detail in Chapter 13, Applying Service-Oriented Architectures with .NET.
  • The second is a different type of communication channel to push updates to other microservices. In fact, we have already mentioned that intra-microservice communication cannot be triggered by an ongoing request, since a complex tree of blocking calls to other microservices would increase request latency to an unacceptable level. As a consequence, updates must not be requested immediately before they’re used, and should be pushed whenever state changes take place.

    Ideally, this kind of communication should be asynchronous to achieve acceptable performance. In fact, synchronous calls would block the sender while it waits for the result, thus increasing the idle time of each microservice. However, synchronous communication that just puts the request in a processing queue and then returns confirmation of the successful communication, instead of the final result, is acceptable if communication is fast enough (low communication latency and high bandwidth).

    A publisher/subscriber communication would be preferable since, in this case, the sender and receiver don’t need to know each other, thus increasing the microservices’ independence. In fact, all the receivers that are interested in a certain type of communication merely need to register to receive a specific event, while senders just need to publish those events. All the wiring is performed by a service that takes care of queuing events and dispatching them to all the subscribers. The publisher/subscriber pattern will be described in more detail in Chapter 10, Design Patterns and .NET 6 Implementation, along with other useful patterns.

While .NET doesn’t directly offer tools for asynchronous communication or client/server tools that implement publisher/subscriber communication, Azure offers them with Azure Service Bus. Azure Service Bus handles both queued asynchronous communication, through Azure Service Bus queues, and publisher/subscriber communication, through Azure Service Bus topics.

Once you’ve configured Azure Service Bus on the Azure portal, you can connect to it in order to send messages/events and to receive messages/events through a client contained in the Microsoft.Azure.ServiceBus NuGet package.

Azure Service Bus has two types of communication: queue-based and topic-based. In queue-based communication, each message that’s placed in the queue by a sender is removed from the queue by the first receiver that pulls it from the queue. Topic-based communication, on the other hand, is an implementation of the publisher/subscriber pattern. Each topic has several subscriptions and a different copy of each message sent to a topic can be pulled from each topic subscription.

The design flow is as follows; a minimal code sketch follows the list:

  1. Define an Azure Service Bus private namespace.
  2. Get the root connection strings that were created by the Azure portal and/or define new connection strings with fewer privileges.
  3. Define queues and/or topics where the sender will send their messages in binary format.
  4. For each topic, define names for all the required subscriptions.
  5. In the case of queue-based communication, the sender sends messages to a queue and the receivers pull messages from the same queue. Each message is delivered to one receiver. That is, once a receiver gains access to the queue, it reads and removes one or more messages.
  6. In the case of topic-based communication, each sender sends messages to a topic, while each receiver pulls messages from its private subscription associated with that topic.
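
The following is a minimal queue-based sketch that uses the client from the Microsoft.Azure.ServiceBus NuGet package; the connection string and queue name are placeholders that must be replaced with the values defined on the Azure portal:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class QueueExample
{
    // Placeholders: copy these values from the Azure portal.
    private const string ConnectionString = "<your connection string>";
    private const string QueueName = "<your queue>";

    public static async Task SendAsync(string text)
    {
        var client = new QueueClient(ConnectionString, QueueName);
        await client.SendAsync(new Message(Encoding.UTF8.GetBytes(text)));
        await client.CloseAsync();
    }

    public static void StartReceiving()
    {
        var client = new QueueClient(ConnectionString, QueueName);
        client.RegisterMessageHandler(
            async (message, cancellationToken) =>
            {
                Console.WriteLine(Encoding.UTF8.GetString(message.Body));
                // Remove the message from the queue once processed.
                await client.CompleteAsync(
                    message.SystemProperties.LockToken);
            },
            new MessageHandlerOptions(args => Task.CompletedTask)
            { AutoComplete = false, MaxConcurrentCalls = 1 });
    }
}

Topic-based communication is analogous: senders use a TopicClient, while receivers pull messages from their private subscriptions through a SubscriptionClient.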

There are also other commercial and free open source alternatives to Azure Service Bus, such as NServiceBus, MassTransit, and Brighter. They enhance existing brokers (like Azure Service Bus itself) with higher-level functionalities.

There is also a completely independent option that can also be used on on-premises platforms: RabbitMQ. It is free and open source and can be installed locally, on a virtual machine, or in a Docker container. Then, you can connect to it through the client contained in the RabbitMQ.Client NuGet package.
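
The following is a minimal sketch based on the RabbitMQ.Client package, assuming a broker listening on localhost (for instance, the official RabbitMQ Docker image); the queue name and message content are illustrative:

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Declaring a queue is idempotent: it is created only if it doesn't exist.
channel.QueueDeclare("updates", durable: true, exclusive: false,
    autoDelete: false);

// Publish a message to the queue through the default exchange.
channel.BasicPublish(exchange: "", routingKey: "updates",
    basicProperties: null, body: Encoding.UTF8.GetBytes("state changed"));

// Consume messages asynchronously.
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, args) =>
    Console.WriteLine(Encoding.UTF8.GetString(args.Body.ToArray()));
channel.BasicConsume(queue: "updates", autoAck: true, consumer: consumer);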

The functionalities of RabbitMQ are similar to the ones offered by Azure Service Bus but you have to take care of more implementation details, like serialization, reliable messages, and error handling, while Azure Service Bus takes care of all the low-level operations and offers you a simpler interface. However, there are clients that build a more powerful abstraction on top of RabbitMQ, like, for instance, EasyNetQ. The publisher/subscriber-based communication pattern used by both Azure Service Bus and RabbitMQ will be described in Chapter 10, Design Patterns and .NET 6 Implementation. RabbitMQ will be described in more detail in Chapter 14, Implementing Microservices with .NET.

Resilient task execution

Resilient communication and, in general, resilient task execution can be implemented easily with the help of a .NET library called Polly, whose project is a member of the .NET Foundation. Polly is available through the Polly NuGet package.

In Polly, you define policies, and then execute tasks in the context of those policies, as follows:

var myPolicy = Policy
  .Handle<HttpRequestException>()
  .Or<OperationCanceledException>()
  .RetryAsync(3);
...
await myPolicy.ExecuteAsync(async () =>
{
    //your code here
});

The first part of each policy specifies the exceptions that must be handled. Then, you specify what to do when one of those exceptions is captured. In the preceding code, the task passed to ExecuteAsync is retried up to three times if a failure is reported by either an HttpRequestException exception or an OperationCanceledException exception.

The following is the implementation of an exponential retry policy:

var retryPolicy = Policy
    ...
    //Exceptions to handle here
    .WaitAndRetryAsync(6,
        retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));

The first argument of WaitAndRetryAsync specifies that a maximum of six retries is performed in the event of failure. The lambda function passed as the second argument specifies how much time to wait before the next attempt. In this specific example, this time grows exponentially with the number of attempts by a power of 2 (2 seconds for the first retry, 4 seconds for the second retry, and so on).

The following is a simple circuit breaker policy:

var breakerPolicy = Policy
    .Handle<SomeExceptionType>()
    .CircuitBreakerAsync(6, TimeSpan.FromMinutes(1));

After six consecutive failures, the task can’t be executed for 1 minute: any further attempt fails immediately with an exception.

The following is the implementation of the Bulkhead Isolation policy (see the Microservices design principles section for more information):

var bulkheadPolicy = Policy
    .BulkheadAsync(10, 15);

A maximum of 10 parallel executions is allowed through the policy’s ExecuteAsync method. Further tasks are inserted in an execution queue, which has a limit of 15 slots. If the queue limit is exceeded, an exception is thrown.

For the Bulkhead Isolation policy to work properly and, in general, for every strategy to work properly, task executions must be triggered through the same policy instance; otherwise, Polly is unable to count how many executions of a specific task are active.

Policies can be combined with the Wrap method:

var combinedPolicy = Policy
  .Wrap(retryPolicy, breakerPolicy);

Polly offers several more options, such as generic methods for tasks that return a specific type, timeout policies, task result caching, the ability to define custom policies, and so on. It is also possible to configure Polly as part of an HttpClient definition in the dependency injection section of any ASP.NET Core and .NET application, which makes it straightforward to define resilient clients.
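
The following is a sketch of this approach, assuming the Microsoft.Extensions.Http.Polly NuGet package and an IServiceCollection instance called services; every request issued by the named client is retried with exponential delays on transient HTTP errors:

using System;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;

services.AddHttpClient("resilient")
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError() // 5xx, 408, and HttpRequestException
        .WaitAndRetryAsync(6,
            retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt))));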

The link to the official Polly documentation is in the Further reading section of this chapter.

Using generic hosts

Each microservice may need to run several independent threads, each performing a different operation on requests received. Such threads need several resources, such as database connections, communication channels, specialized modules that perform complex operations, and so on. Moreover, all processing threads must be adequately initialized when the microservice is started and gracefully stopped when the microservice is stopped as a consequence of either load balancing or errors.

All of these needs led the .NET team to conceive and implement hosted services and hosts. A host creates an adequate environment for running several tasks, known as hosted services, and provides them with resources, common settings, and graceful start/stop.

The concept of a web host was mainly conceived to implement the ASP.NET Core web framework but, starting with .NET Core 2.1, the host concept was extended to all .NET applications.

At the time of writing this book, a Host is automatically created for you in any ASP.NET Core, Blazor, and Worker Service project. The simplest way to test .NET Host features is to select the Worker Service project template.

Figure 5.3: Creating a Worker Service project in Visual Studio

All features related to the concept of a Host are contained in the Microsoft.Extensions.Hosting NuGet package.

Program.cs contains some skeleton code for configuring the host with a fluent interface, starting with the CreateDefaultBuilder method of the Host class. The final step of this configuration is calling the Build method, which assembles the actual host with all the configuration information we provided:

...
var myHost = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        //some configuration
        ...
    })
    .Build();
...

Host configuration includes defining the common resources, defining the default folder for files, loading the configuration parameters from several sources (JSON files, environment variables, and any arguments that are passed to the application), and declaring all the hosted services.

It is worth pointing out that ASP.NET Core and Blazor projects use methods that perform pre-configuration of the Host that includes several of the tasks listed previously.

Then, the host is started, which causes all the hosted services to be started:

await host.RunAsync();

The program remains blocked on the preceding instruction until the host is shut down. The host is automatically shut down when the operating system kills the process. However, the host can also be shut down manually either by one of the hosted services or externally by calling await host.StopAsync(timeout). Here, timeout is a time span defining the maximum time to wait for the hosted services to stop gracefully. After this time, all the hosted services are aborted if they haven’t been terminated. We will explain how a hosted service can shut down the host later on in this subsection.

When host.RunAsync is launched from within another thread rather than directly in Program.cs, the need to shut down the host can be signaled through a cancellationToken passed to RunAsync:

await host.RunAsync(cancellationToken)

This way, the shutdown is triggered as soon as another thread moves the cancellationToken into a canceled state.

By default, the host has a 5-second timeout for shutting down; that is, it waits 5 seconds before exiting once a shutdown has been requested. This time can be changed within the ConfigureServices method, which is used to declare hosted services and other resources:

var myHost = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.Configure<HostOptions>(option =>
        {
            option.ShutdownTimeout = System.TimeSpan.FromSeconds(10);
        });
        ....
        ....
        //further configuration
    })
    .Build();

However, increasing the host timeout doesn’t increase the orchestrator timeout, so if the host waits too long, the whole microservice is killed by the orchestrator.

If no cancellation token is explicitly passed to Run or RunAsync, a cancellation token is automatically generated and is automatically signaled when the operating system informs the application it is going to kill it. This cancellation token is passed to all hosted services to give them the opportunity to stop gracefully.

Hosted services are implementations of the IHostedService interface, whose only methods are StartAsync(cancellationToken) and StopAsync(cancellationToken).

Both methods are passed a cancellationToken. The cancellationToken in the StartAsync method signals that a shutdown was requested. The StartAsync method periodically checks this cancellationToken while performing all the operations needed to start the hosted service and, if it is signaled, the start process is aborted. On the other hand, the cancellationToken in the StopAsync method signals that the shutdown timeout has expired.

Hosted services can be declared in the same ConfigureServices method that’s used to define host options, as follows:

services.AddHostedService<MyHostedService>();

Most declarations inside ConfigureServices require the addition of the following namespace:

using Microsoft.Extensions.DependencyInjection;

Usually, the IHostedService interface isn’t implemented directly; instead, hosted services inherit from the BackgroundService abstract class, which exposes the easier-to-implement ExecuteAsync(CancellationToken) method, where we can place the whole logic of the service. A shutdown is signaled through the cancellationToken passed as an argument, which is easier to handle. We will look at an implementation of IHostedService in the example at the end of Chapter 14, Implementing Microservices with .NET.
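
For instance, the following is a minimal sketch of a hosted service (the HeartbeatService name and the 5-second interval are illustrative) that keeps working until shutdown is signaled through the cancellation token:

public class HeartbeatService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            Console.WriteLine($"Alive at {DateTime.UtcNow:O}");
            // Task.Delay honors the token, so shutdown is not delayed.
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}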

To allow a hosted service to shut down the whole host, we need to declare an IHostApplicationLifetime interface as one of its constructor parameters:

public class MyHostedService : BackgroundService
{
    private readonly IHostApplicationLifetime _applicationLifetime;
    public MyHostedService(IHostApplicationLifetime applicationLifetime)
    {
        _applicationLifetime = applicationLifetime;
    }
    protected override async Task ExecuteAsync(CancellationToken token)
    {
        ...
        // shut down the whole host
        _applicationLifetime.StopApplication();
        ...
    }
}

When the hosted service is created, it is automatically passed an implementation of IHostApplicationLifetime, whose StopApplication method will trigger the host shutdown. This implementation is handled automatically, but we can also declare custom resources whose instances will be automatically passed to all the host service constructors that declare them as parameters. Therefore, say we define a constructor like this one:

public MyClass(MyResource x, IResourceInterface1 y)
{
    ...
}

There are several ways to define the resources needed by the preceding constructor:

services.AddTransient<MyResource>();
services.AddTransient<IResourceInterface1, MyResource1>();
services.AddSingleton<MyResource>();
services.AddSingleton<IResourceInterface1, MyResource1>();

When we use AddTransient, a different instance is created and passed to all the constructors that require an instance of that type. On the other hand, with AddSingleton, a unique instance is created and passed to all the constructors that require the declared type. The overload with two generic types allows you to pass an interface and a type that implements that interface. This way, a constructor requires the interface and is decoupled from the specific implementation of that interface.
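
The difference can be verified with a small sketch, assuming the Microsoft.Extensions.DependencyInjection namespace: two transient resolutions return different instances, while two singleton resolutions return the same one.

var provider = new ServiceCollection()
    .AddTransient<MyResource>()
    .AddSingleton<IResourceInterface1, MyResource1>()
    .BuildServiceProvider();

// Transient: a new instance per resolution.
bool differ = !ReferenceEquals(
    provider.GetRequiredService<MyResource>(),
    provider.GetRequiredService<MyResource>()); // true

// Singleton: always the same instance.
bool same = ReferenceEquals(
    provider.GetRequiredService<IResourceInterface1>(),
    provider.GetRequiredService<IResourceInterface1>()); // true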

If resource constructors contain parameters, they will be automatically instantiated with the types declared in ConfigureServices in a recursive fashion. This pattern of interaction with resources is called dependency injection (DI) and will be discussed in detail in Chapter 10, Design Patterns and .NET 6 Implementation.

IHostBuilder also has a method we can use to define the default folder, that is, the folder used to resolve all relative paths mentioned in all .NET methods:

.UseContentRoot("c:\<default path>")

It also has methods that we can use to add logging targets:

.ConfigureLogging((hostContext, configLogging) =>
    {
        configLogging.AddConsole();
        configLogging.AddDebug();
    })

The previous example shows a console-based logging source, but we can also log into Azure targets with adequate providers. The Further reading section contains links to some Azure logging providers that can work with microservices that have been deployed in Azure. Once you’ve configured logging, you can enable your hosted services to log custom messages by adding an ILoggerFactory or an ILogger<T> parameter to their constructors.
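
For instance, the MyHostedService class defined previously could accept an ILogger<MyHostedService> in its constructor and log custom messages, as in this sketch (the ILogger<T> type lives in the Microsoft.Extensions.Logging namespace):

public class MyHostedService : BackgroundService
{
    private readonly ILogger<MyHostedService> _logger;
    public MyHostedService(ILogger<MyHostedService> logger)
    {
        _logger = logger;
    }
    protected override Task ExecuteAsync(CancellationToken token)
    {
        _logger.LogInformation("Hosted service starting");
        return Task.CompletedTask;
    }
}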

Finally, IHostBuilder has methods we can use to read configuration parameters from various sources:

.ConfigureAppConfiguration(configHost =>
    {
        configHost.AddJsonFile("settings.json", optional: true);
        configHost.AddEnvironmentVariables(prefix: "PREFIX_");
        configHost.AddCommandLine(args);
    })

The way parameters can be used from inside the application will be explained in more detail in Chapter 15, Presenting ASP.NET Core MVC, which is dedicated to ASP.NET.

Visual Studio support for Docker

Visual Studio offers support for creating, debugging, and deploying Docker images. Docker deployment requires us to install Docker Desktop for Windows on our development machine so that we can run Docker images. The download link can be found in the Technical requirements section at the beginning of this chapter. Before we start any development activity, we must ensure it is installed and running (you should see a Docker icon in the window notification bar when the Docker runtime is running).

Docker support will be described with a simple ASP.NET Core MVC project. Let’s create one. To do so, follow these steps:

  1. Name the project MvcDockerTest.
  2. For simplicity, disable authentication, if not already disabled.
  3. You are given the option to add Docker support when you create the project, but please don’t check the Docker support checkbox. You can test how Docker support can be added to any project after it has been created.

Once you have your ASP.NET MVC application scaffolded and running, right-click on its project icon in Solution Explorer and select Add and then Container Orchestrator Support | Docker Compose. Finally, in the window that appears, select Windows as the operating system.

If you installed both WSL and Windows Containers, you might get a dialog asking you to pick which operating system your container should use; you can choose the one you prefer. This will enable not only the creation of a Docker image but also the creation of a Docker Compose project, which helps you configure Docker Compose files so that they run and deploy several Docker images simultaneously. In fact, if you add another MVC project to the solution and enable container orchestrator support for it, the new Docker image will be added to the same Docker Compose file.

The advantage of enabling Docker Compose instead of just Docker is that you can manually configure how the image is run on the development machine, as well as how Docker image ports are mapped to external ports by editing the Docker Compose files that are added to the solution.

If your Docker runtime has been installed properly and is running, you should be able to run the Docker image from Visual Studio.

Analyzing the Docker file

Let’s analyze the Docker file that was created by Visual Studio. It is a sequence of image creation steps. Each step enriches an existing image with something else with the help of the FROM instruction, which is a reference to an already existing image. The following is the first step:

FROM mcr.microsoft.com/dotnet/aspnet:x.x AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

The first step uses the mcr.microsoft.com/dotnet/aspnet:x.x ASP.NET (Core) runtime that was published by Microsoft in the Docker public repository (where x.x is the ASP.NET (Core) version that was selected in your project).

The WORKDIR command creates the directory that follows the command within the image that is going to be created. The two EXPOSE commands declare which ports will be exposed outside the image and mapped to ports of the actual hosting machine. Mapped ports are decided in the deployment stage either as command-line arguments of a Docker command or within a Docker Compose file. In our case, there are two ports: one for HTTP (80) and another for HTTPS (443).

This intermediate image is cached by Docker, which doesn’t need to recompute it since it doesn’t depend on the code we write but only on the selected version of the ASP.NET (Core) runtime.

The second step produces a different image that will not be deployed. Instead, it will be used to build the application-specific files that will be deployed:

FROM mcr.microsoft.com/dotnet/sdk:x.x AS build
WORKDIR /src
COPY ["MvcDockerTest/MvcDockerTest.csproj", "MvcDockerTest/"]
RUN dotnet restore MvcDockerTest/MvcDockerTest.csproj
COPY . .
WORKDIR /src/MvcDockerTest
RUN dotnet build MvcDockerTest.csproj -c Release -o /app/build
FROM build AS publish
RUN dotnet publish MvcDockerTest.csproj -c Release -o /app/publish

This step starts from the .NET SDK image, which contains parts we don’t need for deployment but that are needed to process the project code. A new src directory is created in the build image and made the current directory. Then, the project file is copied into /src/MvcDockerTest.

The RUN command executes an operating system command on the image. In this case, it invokes the dotnet command, asking it to restore the NuGet packages referenced by the previously copied project file. Copying just the project file and restoring packages before copying the remaining sources lets Docker cache the restore layer, so it is recomputed only when the project’s references change.

Then, the COPY . . command copies the whole solution file tree into the image’s /src directory. Next, the project directory is made the current directory, and the dotnet command is asked to build the project in Release mode and place all the output files in the new /app/build directory. Finally, the dotnet publish task is executed in a new stage called publish, outputting the published binaries into /app/publish.

The final step starts from the image that we created in the first step, which contains the ASP.NET (Core) runtime, and adds all the files that were published in the previous step:

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MvcDockerTest.dll"]

The ENTRYPOINT command specifies the operating system command to execute when the image is run. It accepts an array of strings. In our case, it specifies the dotnet command and its first command-line argument, that is, the DLL we need to execute.
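
If you want to exercise the Dockerfile outside of Visual Studio, you can build the image manually from the solution folder (a sketch; the mvcdockertest tag is an arbitrary name) and then run it with the docker run command shown previously:

# Build the image using the project's Dockerfile, with the solution folder as the build context
docker build -t mvcdockertest -f MvcDockerTest/Dockerfile .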

Publishing the project

If we right-click on our project and click Publish, we are presented with several options:

  • Publish the image to an existing or new web app (automatically created by Visual Studio)
  • Publish to one of several Docker registries, including a private Azure Container Registry that, if it doesn’t already exist, can be created from within Visual Studio

Docker Compose support allows you to run and publish a multi-container application and to add further images, such as a containerized database that is available on any platform.

The following docker-compose.yml file defines two ASP.NET Core applications within the same Docker Compose project:

version: '3.4'
services:
  mvcdockertest:
    image: ${DOCKER_REGISTRY-}mvcdockertest
    build:
      context: .
      dockerfile: MvcDockerTest/Dockerfile
  mvcdockertest1:
    image: ${DOCKER_REGISTRY-}mvcdockertest1
    build:
      context: .
      dockerfile: MvcDockerTest1/Dockerfile

The preceding code references existing Docker files. Any environment-dependent information is placed in the docker-compose.override.yml file, which is merged with the docker-compose.yml file when the application is launched from Visual Studio:

version: '3.4'
services:
  mvcdockertest:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80 
    ports:
      - "80"
      - "443"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:C:\Users\ContainerUser\AppData\Roaming\Microsoft\UserSecrets:ro
      - ${APPDATA}/ASP.NET/Https:C:\Users\ContainerUser\AppData\Roaming\ASP.NET\Https:ro
  mvcdockertest1:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
    ports:
      - "80"
      - "443"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:C:\Users\ContainerUser\AppData\Roaming\Microsoft\UserSecrets:ro
      - ${APPDATA}/ASP.NET/Https:C:\Users\ContainerUser\AppData\Roaming\ASP.NET\Https:ro

For each image, the file specifies some environment variables that will be defined in the container when the application is launched, the port mappings, and some host files to map into the container.

The host files listed under volumes are directly mapped into the containers. Each declaration contains the path on the host, the path it is mapped to inside the container, and the desired access rights (ro stands for read-only). In our case, volumes are used to map the user secrets folder and the self-signed HTTPS certificate that’s used by Visual Studio.

Now, suppose we want to add a containerized SQL Server instance. We would need something like the following instructions split between docker-compose.yml and docker-compose.override.yml:

sql.data:
  image: mcr.microsoft.com/mssql/server:2019-latest
  environment:
    - SA_PASSWORD=Pass@word
    - ACCEPT_EULA=Y
  ports:
    - "5433:1433"

Here, the preceding code specifies the properties of the SQL Server container, as well as SQL Server’s configuration and installation parameters. More specifically, the preceding code contains the following information (a quick connectivity check is sketched after the list):

  • sql.data is the name that’s given to the container.
  • image specifies where to take the image from. In our case, the image is contained in a public Docker registry.
  • environment specifies the environment variables that are needed by SQL Server, that is, the administrator password and the acceptance of a SQL Server license.
  • As usual, ports specifies the port mappings.
  • docker-compose.override.yml is used to run the images from within Visual Studio.
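
Once the container is up, you can verify that the instance is reachable from the host (a sketch, assuming the sqlcmd utility is installed; Pass@word is the sample password from the preceding snippet):

# Connect through host port 5433, which is mapped to the container's port 1433
sqlcmd -S localhost,5433 -U sa -P "Pass@word" -Q "SELECT @@VERSION"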

If you need to specify parameters for either the production environment or the testing environment, you can add further docker-compose-xxx.override.yml files, such as docker-compose-staging.override.yml and docker-compose-production.override.yml, and then launch them manually in the target environment with something like the following code:

docker-compose -f docker-compose.yml -f docker-compose-staging.override.yml up

Then, you can destroy all the containers with the following code:

docker-compose -f docker-compose.yml -f docker-compose-staging.override.yml down

Since docker-compose has limited capabilities when it comes to handling node clusters, it is mainly used in testing and development environments. For production environments, more sophisticated tools are needed, as we will see later in this chapter (in the Which tools are needed to manage microservices? section).

Azure and Visual Studio support for microservice orchestration

Visual Studio has specific project templates for defining microservices to be deployed in Azure Kubernetes and has extensions for debugging a single microservice while it communicates with other microservices deployed in Azure Kubernetes.

Also available are tools for testing and debugging several communicating microservices in the development machine with no need to install any Kubernetes software, and for deploying them automatically on Azure Kubernetes with just minimal configuration information.

All Visual Studio tools for Azure Kubernetes will be described in Chapter 6, Azure Kubernetes Service.

Which tools are needed to manage microservices?

Effectively handling microservices in your CI/CD cycles requires both a private Docker image registry and a state-of-the-art microservice orchestrator that’s capable of doing the following:

  • Allocating and load-balancing microservices on available hardware nodes
  • Monitoring the health state of services and replacing faulty services if hardware/software failures occur
  • Logging and presenting analytics
  • Allowing the designer to dynamically change requirements such as hardware nodes allocated to a cluster, the number of service instances, and so on

The following subsection describes the Azure facilities we can use to store Docker images. The microservices orchestrators available in Azure are described in a dedicated chapter, namely, Chapter 6, Azure Kubernetes Service.

Defining your private Docker registry in Azure

Defining your private Docker registry in Azure is easy. Just type Container registries into the Azure search bar and select Container registries. On the page that appears, click on the Create button.

The following form will appear:

Figure 5.4: Creating an Azure private Docker registry

The name you select is used to compose the overall registry URI: <name>.azurecr.io. As usual, you can specify the subscription, resource group, and location. The SKU dropdown lets you choose from various levels of offerings that differ in terms of performance, available memory, and a few other auxiliary features.
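
If you prefer the command line, and once the Azure CLI is installed (see below), the same registry can also be created with a single command (a sketch; MyResourceGroup and myregistry are placeholder names):

# Create a Basic-tier container registry in an existing resource group
az acr create --resource-group MyResourceGroup --name myregistry --sku Basic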

Whenever you mention image names in Docker commands or in a Visual Studio publish form, you must prefix them with the registry URI: <name>.azurecr.io/<my imagename>.

If images are created with Visual Studio, they can be pushed by following the instructions that Visual Studio shows during the publish procedure. Otherwise, you must use Docker commands to push them into your registry.

The easiest way to use Docker commands that interact with the Azure registry is by installing the Azure CLI on your computer. Download the installer from https://aka.ms/installazurecliwindows and execute it. Once the Azure CLI has been installed, you can use the az command from Windows Command Prompt or PowerShell. In order to connect with your Azure account, you must execute the following login command:

az login

This command should start your default browser and should drive you through the manual login procedure.

Once logged into your Azure account, you can log in to your private registry by typing the following command:

az acr login --name {registryname}

Now, let’s say you have a Docker image in another registry. As a first step, let’s pull the image onto your local computer:

docker pull other.registry.io/samples/myimage 

If there are several versions of the preceding image, the latest will be pulled since no version was specified. The version of the image can be specified as follows:

docker pull other.registry.io/samples/myimage:version1.0 

Using the following command, you should see other.registry.io/samples/myimage within the list of local images:

docker images

Then, tag the image with the path you want to assign in the Azure registry:

docker tag other.registry.io/samples/myimage myregistry.azurecr.io/testpath/myimage

Both the source and destination names may include a version tag (:<version name>).
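
For instance (a sketch with hypothetical version tags):

# Tag version1.0 of the pulled image with a new 1.0 tag in the Azure registry path
docker tag other.registry.io/samples/myimage:version1.0 myregistry.azurecr.io/testpath/myimage:1.0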

Finally, push it to your registry with the following command:

docker push myregistry.azurecr.io/testpath/myimage

In this case, you can specify a version; otherwise, the latest version is pushed.

Once the image has been pushed, you can remove it from your local computer using the following command:

docker rmi myregistry.azurecr.io/testpath/myimage

Summary

In this chapter, we described what microservices are and how they have evolved from the concept of a module. Then, we talked about the advantages of microservices and when it is worth using them, as well as general criteria for their design. We also explained what Docker containers are and analyzed the strong connection between containers and microservice architectures.

Then, we took on a more practical implementation by describing all the tools that are available in .NET so that we can implement microservice-based architectures. We also described infrastructures that are needed by microservices and how Azure offers both container registries and container orchestrators.

The next chapter discusses the Azure Kubernetes orchestrator in detail.

Questions

  1. What is the two-fold nature of the module concept?
  2. Is scaling optimization the only advantage of microservices? If not, list some further advantages.
  3. What is Polly?
  4. What Docker support is offered by Visual Studio?
  5. What is an orchestrator and what orchestrators are available on Azure?
  6. Why is publisher/subscriber-based communication so important in microservices?
  7. What is RabbitMQ?
  8. Why are idempotent messages so important?

Further reading

The following are links to the official documentation for Azure Service Bus, RabbitMQ, and other event bus technologies, as well as for Polly and Docker:
