15

Wrapping It All Up

At this point, we have discovered several patterns and nuances surrounding microservices design. Now, let us explore our patterns at a high level and tie all the concepts together. It is essential to scope which pattern fits best into each situation.

The microservices architectural approach to software development promotes the loose coupling of autonomous processes and the creation of standalone software components that handle these processes. An excellent approach to scoping these processes is to employ the domain-driven design (DDD) pattern. In DDD, we categorize the system’s functionality into sub-sections called domains and then use these domains to govern what services or standalone apps are needed to support each domain. We then use the aggregator pattern to scope the domain objects needed per service.

Aggregator pattern

We scope the data needed in each domain and what data needs to be shared between domains. At this point, we do risk duplicating data points across domains. Still, it is a condition we accept, given the need to promote autonomy across the services and their respective databases.

In scoping the data requirements, we use the aggregator pattern, which allows us to define the various data requirements and relationships the different entities will have. An aggregate represents a cluster of domain objects that can be seen as a single unit. In this scoping exercise, we seek to find a root element in this cluster, and all other entities are seen as domain objects with associations with the root.
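
As a minimal sketch, an aggregate in the appointment domain might look like the following; the Appointment and AppointmentNote types and their properties are hypothetical and serve only to illustrate the shape of a root and its cluster:

```csharp
// Hypothetical appointment aggregate: the Appointment is the root of the cluster,
// it owns its child domain objects, and it references the patient by ID only.
public class Appointment
{
    public Guid Id { get; } = Guid.NewGuid();
    public Guid PatientId { get; }                         // reference into another domain
    public DateTime ScheduledFor { get; private set; }
    public List<AppointmentNote> Notes { get; } = new();   // domain objects owned by the root

    public Appointment(Guid patientId, DateTime scheduledFor)
    {
        PatientId = patientId;
        ScheduledFor = scheduledFor;
    }

    // All changes to the cluster go through the root so the unit stays consistent.
    public void Reschedule(DateTime newTime) => ScheduledFor = newTime;
    public void AddNote(string text) => Notes.Add(new AppointmentNote(text));
}

public record AppointmentNote(string Text);
```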

The general idea in scoping our domain objects per service is to capture the minimum amount of data needed for each service to operate. This means we will try to avoid storing entire domain records in several services and instead allow our services to communicate to retrieve details that might be domain-specific and reside in another service. This is where we need our services to communicate.

Synchronous and asynchronous communication

Our microservices need to communicate from time to time. The type of communication that we employ is based on the type of operation that we need to complete in the end. Synchronous communication means that one service will directly call another and wait for a response. It will then use this response to inform the process it sought to complete. This approach is ideal for situations where one service might have some data and needs the rest from another. For instance, the appointment booking service knows the patient’s ID number but has no other information. It will then need to make a synchronous API call to the patients’ service and GET the patient’s details. It can then carry on to process those details as necessary.
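
A minimal sketch of such a synchronous lookup, assuming a typed HttpClient and a hypothetical PatientDto and patients endpoint, might look like this:

```csharp
using System.Net.Http.Json;

// Hypothetical DTO and route; the typed client is registered with AddHttpClient elsewhere.
public record PatientDto(Guid Id, string FirstName, string LastName);

public class PatientsClient
{
    private readonly HttpClient _httpClient;

    public PatientsClient(HttpClient httpClient) => _httpClient = httpClient;

    // The caller awaits the response before it can continue its own processing.
    public Task<PatientDto?> GetPatientAsync(Guid patientId) =>
        _httpClient.GetFromJsonAsync<PatientDto>($"api/patients/{patientId}");
}
```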

Synchronous communication is great when we need instant feedback from another service. Still, it can introduce issues and increase response time when several other services must be consulted. We also run the risk of failures with each API call attempt, and one failure might lead to a total failure. We need to handle partial or complete failures gracefully and relative to the rules governing the business process. To mitigate this risk, we must employ asynchronous communication strategies and hand the operation off to a more stable and always-on intermediary that will transport data to the other services as needed.

Asynchronous communication is better used in processes that need another service’s participation but not necessarily immediate feedback. The process of booking an appointment, for instance, will need to complete several operations that involve other microservices and third-party services. The process, for instance, will take and save the appointment information, make a calendar entry, and send several emails and notifications. We can then use an asynchronous messaging system (such as RabbitMQ or Azure Service Bus) as the intermediary system that will receive the information from the microservice. The other services that need to participate are configured to monitor the messaging system and process any data that appears. Each service can then individually complete its operation in its own time and independently. The appointment service can also confirm success based on its needs without worrying about whether everything has been done.
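
As a hedged sketch of this hand-off, assuming the Azure.Messaging.ServiceBus SDK and a hypothetical AppointmentBooked message and queue name:

```csharp
using System.Text.Json;
using Azure.Messaging.ServiceBus;

// Hypothetical message contract and queue name for illustration.
public record AppointmentBooked(Guid AppointmentId, Guid PatientId, DateTime ScheduledFor);

public class AppointmentPublisher
{
    private readonly ServiceBusSender _sender;

    public AppointmentPublisher(ServiceBusClient client) =>
        _sender = client.CreateSender("appointment-booked");

    // Hand the message to the broker; subscribing services process it in their own time.
    public Task PublishAsync(AppointmentBooked message) =>
        _sender.SendMessageAsync(new ServiceBusMessage(JsonSerializer.Serialize(message)));
}
```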

As we separate and scope our business processes and figure out which operations require synchronous communication and which ones require asynchronous communication, we find that we need better ways to organize our code and properly separate the moving parts of our application’s code. This is where we begin looking at more complex design patterns such as Command Query Responsibility Segregation (CQRS).

CQRS

CQRS is a popular pattern employed to allow developers to better organize application logic. It is an evolution of the earlier pattern called Command Query Separation (CQS), which sought to give developers a clean way to separate logic that augments data in the database (commands) from logic that retrieves data (queries).

By introducing this level of separation, we can introduce additional abstractions and adhere to our SOLID principles more easily. Here, we introduce the concept of handlers, which represent individual units of work to be done. These handlers are implemented to complete a specific operation using only the minimum necessary dependencies. This allows the code to become more scalable and easier to maintain.

One downside to introducing this level of separation and abstraction is a major increase in the number of files and folders. To fully implement CQRS based on the recommended approach, we might also have several databases to support a single application. This is because the database used for the query operations needs to be optimized, which usually means we need a denormalized and high-speed lookup database structure. Our command operations might use a different database since storing data generally has stricter guidelines than reading it.

Using the mediator pattern (through the MediatR library) with CQRS helps us more easily refer to the specific handlers needed without introducing too many lines of code to make a simple function call. We have access to NuGet packages that help us easily implement this pattern and reduce our overall development overhead.
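
A minimal query-side sketch using the MediatR library might look like the following; the DTO, repository abstraction, and registration call shown in the comments are hypothetical placeholders:

```csharp
using MediatR;

// Hypothetical DTO and read-side abstraction, defined here only for illustration.
public record PatientDto(Guid Id, string FirstName, string LastName);

public interface IPatientReadRepository
{
    Task<PatientDto> GetByIdAsync(Guid id, CancellationToken cancellationToken);
}

// A query (read) request and its dedicated handler, kept separate from any commands.
public record GetPatientQuery(Guid PatientId) : IRequest<PatientDto>;

public class GetPatientHandler : IRequestHandler<GetPatientQuery, PatientDto>
{
    private readonly IPatientReadRepository _readRepository;

    public GetPatientHandler(IPatientReadRepository readRepository) =>
        _readRepository = readRepository;

    public Task<PatientDto> Handle(GetPatientQuery request, CancellationToken cancellationToken) =>
        _readRepository.GetByIdAsync(request.PatientId, cancellationToken);
}

// Callers depend only on IMediator and send requests to the matching handler, for example:
// var patient = await mediator.Send(new GetPatientQuery(patientId));
// Registration (MediatR 12+ syntax):
// builder.Services.AddMediatR(cfg => cfg.RegisterServicesFromAssemblyContaining<GetPatientHandler>());
```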

Ultimately, this pattern should be leveraged for applications that have more complex business logic needs. It is not a recommended approach for standard applications that do the basic Create, Read, Update, and Delete (CRUD) operations, given the complexity level and project bloat it brings with it from an application code and supporting infrastructure perspective. It also introduces a new problem: keeping our read and write databases in sync.

If we take the approach of using separate databases for query and command operations, we run the risk of having out-of-date data available for read operations in between writes. The best solution for this disconnect between the databases is called event sourcing.

Event sourcing patterns

Event sourcing patterns bridge the gap between databases that need to be in sync. They help us track the changes across the system and act as a behind-the-scenes transport or lookup system to ensure that we always have the best data representation at any time.

First, an event represents a moment in time. The data contained in the event will indicate the type of action taken and the resulting data. This information can then be used for several purposes within the system:

  • Complete tasks for third-party services that need the resulting data for their operations
  • Update the database for query operations with the latest copy of the augmented record
  • Add to an event store as a versioning mechanism

Event sourcing can play several roles in a system and can aid us in completing several routine and unique tasks. Routine tasks within this context could include updating our read-only query database and acting as a source of truth for services that need to be executed based on the latest data after an operation.

Less routine operations would depend on implementing an event store, another database provisioned to keep track of each event and its copy of the data. This acts as a versioning mechanism that allows us to easily facilitate auditing activities, point-in-time lookups, and even business intelligence and analytics operations. By keeping track of each data version over time, we can see the precise evolution of the records and use it to inform business decisions.

Not surprisingly, this pattern works naturally with CQRS, as we can easily and naturally trigger our events from our handlers. We can even use the event store as our query database lookup location, easing the tension associated with reading stale data. We can then extend our query capabilities and leverage the version and point-in-time lookups we now have access to.

Through the previously mentioned NuGet packages that implement the mediator pattern, we can raise events at the end of an operation. We can also implement handlers that subscribe to specific events and carry out their operations once an event is raised. This allows us to easily scale the number of subscribers per event and individually and uniquely implement operations that execute per event.
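
A small sketch of this event-and-subscriber arrangement using MediatR notifications, with a hypothetical AppointmentBookedEvent, might look like this:

```csharp
using MediatR;

// Hypothetical domain event published at the end of a successful command.
public record AppointmentBookedEvent(Guid AppointmentId, Guid PatientId) : INotification;

// One of possibly many subscribers; each handler completes its own work independently.
public class UpdateReadModelHandler : INotificationHandler<AppointmentBookedEvent>
{
    public Task Handle(AppointmentBookedEvent notification, CancellationToken cancellationToken)
    {
        // Update the denormalized query store and/or append to the event store here (omitted).
        return Task.CompletedTask;
    }
}

// Raised from the command handler once the write succeeds:
// await mediator.Publish(new AppointmentBookedEvent(appointment.Id, appointment.PatientId));
```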

These patterns are implemented per service and in no way unify code spread across several individual applications. Ensure that the patterns you choose are warranted for the microservice. Between event sourcing and CQRS, we have increased the number of scoped databases from one to potentially three. This can introduce hefty infrastructural requirements and costs.

Now, let us review how we should handle database requirements in our microservices application.

Database per service pattern

The microservices architecture promotes autonomy and loose coupling of services. This concept of loose coupling should ideally be implemented throughout the entire code base and infrastructure. Sadly, this is not always possible, often for cost reasons, especially at the database level.

Databases can be expensive to license, implement, host, and maintain. The costs also vary based on the needs of the service that the database supports and the type of storage that is needed. One compelling reason to have individual databases is that we always want to choose the best technology stack for each microservice. Each one needs to retain its individuality, and the database choice is integral to the implementation process.

We have different types of databases, and it is important to appreciate the nuances between each and use that knowledge to scope the best database solution for the data we can expect to store for each microservice. Let us look at some of the more popular options.

Relational databases

Relational databases store data in a tabular format and have strict measures to ensure that stored data is of the highest possible quality by its standards. They are best for systems that need to ensure data accuracy and might have several entities they need to store data for. They generally rely on a language called SQL to interact with data and, through a normalization process, will force us to spread data across several tables. This way, we can avoid repeating data and establish references to a record found in one table in other tables.

The downside is that the strict rules make it difficult to scale on demand and can lead to slower read times when data is related to several entities.

Non-relational databases

Non-relational databases are also referred to as NoSQL databases, given their structural differences from traditional relational databases. They are not as strict regarding data storage and allow for greater scalability. They are best used for systems that require flexible data storage options, given rapidly changing requirements and functionality. They are also popularly used as read-only databases, given that they support the data being structured precisely to the system’s needs. The most popular implementations of these kinds of databases include document databases (such as MongoDB or Azure Cosmos DB), key-value databases (such as Redis), and graph databases (such as Neo4j).

Each type has its strengths and weaknesses. Document databases are the most popular alternative to a relational database, given that they offer a more flexible way to store all the data points while keeping them in one place. This, however, can lead to data duplication and a reduction in overall quality if not managed properly.

When considering the best database option for services, we must consider maintainability, technology maturity and ease of use, and general appropriateness for the task. One size certainly does not fit all, but we must also consider costs and feasibility. We have several approaches to implementing supporting databases for our services; each has pros and cons.

One database for all services

This is the ideal solution from a cost analysis and maintenance perspective. Database engines are powerful and designed to perform under heavy workloads, so having several services use the same database is not difficult to implement. The team also doesn’t need a diverse skill set to maintain the database and work with the technology.

This approach, however, gives us one point of failure for all services. If this database goes offline, then all services will be affected. We also forfeit the flexibility of choosing the best database technology to support the technology stack that best implements each service. While most technology stacks have drivers to support most databases, the fact remains that some languages work best with certain databases. Be very careful when choosing this approach.

One database per service

This solution provides maximum flexibility in our per-service implementations. Here, we can use the database technology that best serves the programming language and framework used and the microservice’s data storage needs. Services requiring a tabular data storage structure can rely on a relational database. For example, a microservice developed in PHP may favor a MySQL database, while one built with ASP.NET Core may favor Microsoft SQL Server. This reduces the friction of supporting a database for which a language has less than adequate tooling. On the other hand, a Node.js-based microservice might favor MongoDB since its data doesn’t need to be as structured and might evolve faster than the other services.

The obvious drawbacks here are that we need to be able to support multiple database technologies, and the skill sets must be present for routine maintenance and upkeep activities. We also incur additional costs for licensing and hosting options since the databases may (ideally, will) require separate server hosting arrangements.

Individually, each service needs to ensure its data is as accurate and reliable as possible. Therefore, we use a concept called transactions to ensure that data either gets augmented successfully or not at all. This is especially useful for relational databases where the data might be spread across several tables. By enforcing this all-or-nothing mechanism, we mitigate partial successes and ensure that the data is consistent across all tables.
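
A minimal all-or-nothing sketch using an EF Core transaction follows; the AppointmentsDbContext, Appointment, and AuditEntry types are assumed to exist in the service:

```csharp
using Microsoft.EntityFrameworkCore;

// AppointmentsDbContext, Appointment, and AuditEntry are assumed types for this sketch.
public class AppointmentWriter
{
    private readonly AppointmentsDbContext _db;

    public AppointmentWriter(AppointmentsDbContext db) => _db = db;

    public async Task SaveAppointmentAsync(Appointment appointment)
    {
        await using var transaction = await _db.Database.BeginTransactionAsync();
        try
        {
            _db.Appointments.Add(appointment);
            _db.AuditEntries.Add(new AuditEntry("AppointmentCreated", appointment.Id));
            await _db.SaveChangesAsync();

            // Both rows are persisted together, or neither is.
            await transaction.CommitAsync();
        }
        catch
        {
            await transaction.RollbackAsync();
            throw;
        }
    }
}
```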

Always choose the best technologies for the microservice you are constructing to address the business problem or domain. This flexibility is one of the more publicized benefits of having a loosely coupled application where the different parts do not need to share assets or functionality.

Conversely, having separate databases supporting autonomous services can lead to serious data quality issues. Recall that some operations need the participation of several services, and sometimes, if one service fails to augment its data store, there is no real way to track what has failed and take corrective measures. Each service will handle its transaction, but an operation involving several independent databases will run the risk of partial completion, which is bad. This is where we can look to the Saga pattern to help us manage this risk.

Using the saga pattern across services

The saga pattern is generally implemented to assist with the concept of all-or-nothing in our microservices application. Each service will do this for itself, but we need mechanisms to allow the services to communicate their success or failure to others and, by extension, act when necessary.

Take, for instance, an operation that requires the participation of four services, each storing bits of data along the way; we need a way for the services to report on whether their database operations were successful and, if not, to trigger rollback operations. Two ways we can implement our saga patterns are through choreography or orchestration.

Using choreography, we implement a messaging system (such as RabbitMQ or Azure Service Bus) where services notify each other of the completion or failure of their operations. There is no central control of the flow of messages. Still, each service is configured to act on the receipt of certain messages and publish messages based on the outcome of its internal operation. This is a good model where we want to retain each service’s autonomy, and no service needs any knowledge of the other.

Choreography seems straightforward in theory but can be complex to implement and extend when new services need to be added to the saga. In the long run, each time the saga needs modification, several touchpoints will need attention. These factors promote the orchestration approach as a viable alternative.

Using the orchestration method, we can establish a central observer that will coordinate the activities related to the saga. It will orchestrate each call to each service and decide on the next step based on the service’s success or failure response. The saga in the orchestrator is implemented to follow specific service calls in a specific order along a success track and, separately, along a failure track. If a failure occurs in the middle of the saga, the orchestrator will begin calling the rollback operations for each service that previously reported success.
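
A deliberately simplified orchestration sketch follows; the ISagaStep abstraction and the BookingSagaOrchestrator are hypothetical and stand in for real service calls and their compensating operations:

```csharp
// ISagaStep is a hypothetical abstraction standing in for real service calls
// and their compensating (rollback) operations.
public interface ISagaStep
{
    Task<bool> ExecuteAsync();   // returns true when the step succeeds
    Task CompensateAsync();      // undoes the step's work
}

public class BookingSagaOrchestrator
{
    private readonly IReadOnlyList<ISagaStep> _steps;

    public BookingSagaOrchestrator(IReadOnlyList<ISagaStep> steps) => _steps = steps;

    public async Task<bool> RunAsync()
    {
        var completed = new Stack<ISagaStep>();

        foreach (var step in _steps)
        {
            if (await step.ExecuteAsync())
            {
                completed.Push(step);
                continue;
            }

            // A failure mid-saga: roll back previously successful steps in reverse order.
            while (completed.Count > 0)
            {
                await completed.Pop().CompensateAsync();
            }

            return false;
        }

        return true;
    }
}
```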

By comparison, the orchestrator approach allows for better control and oversight of what is happening at each step of the saga but might be more challenging to implement and maintain in the long run. We will have just as many touchpoints to maintain as the saga evolves.

Your chosen approach should match your system’s needs and your desired operational behavior. Choreography promotes service autonomy but can lead to a spaghetti-like implementation for a large saga where we need to track which service consumes which message. This also makes it very difficult to debug. The orchestrator method forces us to introduce a central point of failure since if the orchestrator fails, nothing else can happen.

Both approaches, however, hinge on the overall availability of the services and dependencies involved in completing the operation. We need to ensure that we do not take the first failure as the final response and implement logic that will try an operation several times before giving up.

Resilient microservices

Building resilient services is very important. This acts as a safety net against transient failures that would otherwise break our system and lead to poor user experiences. No infrastructure is bulletproof. Every network has failure points, and services that rely on an imperfect network are inherently also imperfect. Beyond the imperfections of the infrastructure, we also need to consider the general application load and the fact that our request now might be one too many. This doesn’t mean that the service is offline; it just means that it is stressed out.

Not all failure reasons are under our control, but how our services react can be. By implementing retry logic, we can have a failed synchronous call to another service be attempted again until it succeeds. This helps us reduce the number of failures in the application and gives us more positive and accurate outcomes in our operations. Typical retry logic involves us making an initial call and observing the response. We try the call again when the response is something other than the expected outcome. We continue this until we receive a response that we can work with. This very simplified take on retry logic has some flaws, however.

We should only retry for so long, since the service might be experiencing a full outage. In that case, we need to implement a policy that will stop making the retry calls after a certain number of attempts. We call this a circuit-breaker policy. We also want to add some time between the retry attempts.

Policies this complex can be implemented using simple code through a NuGet package called Polly. This package allows us to declare global policies that can be used to govern how our HttpClient services make API calls. We can also define specific policies for each API call.
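
A hedged sketch of such policies, assuming the Microsoft.Extensions.Http.Polly and Polly.Extensions.Http packages and illustrative retry and circuit-breaker values, might look like this:

```csharp
using Polly;
using Polly.Extensions.Http;

var builder = WebApplication.CreateBuilder(args);

// The client name, base address, retry count, and break duration are illustrative only.
builder.Services.AddHttpClient("patients", client =>
        client.BaseAddress = new Uri("https://patients-service/"))
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError()                          // 5xx, 408, and HttpRequestException
        .WaitAndRetryAsync(3, attempt =>
            TimeSpan.FromSeconds(Math.Pow(2, attempt))))     // exponential back-off between retries
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError()
        .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)));  // stop calling after 5 straight failures

var app = builder.Build();
app.Run();
```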

Retries go a long way in helping us maintain the appearance of a healthy application. Still, prevention is better than a cure, and we prefer to track and mitigate failures before they become serious. For this, we need to implement health checks.

Importance of health checks

A health check, as the name suggests, allows us to track and report on the health of a service. Each service is a potential point of failure in an application, and each service has dependencies that can influence its health. We need a mechanism that allows us to probe the overall status of our services to be more proactive in solving issues.

ASP.NET Core has a built-in mechanism for reporting on the health of a service, and it can very simply tell us if the service is healthy, degraded, or unhealthy. We can extend this functionality to report not only the health of the service itself but also the health of its connections to dependent services such as databases and caches.

We can also establish various endpoints that can be used to check different outcomes, such as general runtime versus startup health. This categorization comes in handy when we want to categorize monitoring operations based on the tools we are using, monitoring teams in place, or general application startup operations.

We can establish liveness checks, which can be probed at regular intervals to report on the overall health of an application that is expected to be running. We act whenever there is an unhealthy result, which will be a part of our daily maintenance and upkeep activities. When a distributed application is starting up, however, and several services depend on each other, we want to accurately determine which dependent service is healthy and available before we launch the service that depends on it. These kinds of checks are called readiness checks.
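
A minimal sketch of separate liveness and readiness endpoints using ASP.NET Core’s built-in health checks follows; the tag names and the placeholder database probe are illustrative:

```csharp
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

// The tag names and the placeholder database probe are illustrative.
builder.Services.AddHealthChecks()
    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" })
    .AddCheck("database", () => CanConnectToDatabase()
        ? HealthCheckResult.Healthy()
        : HealthCheckResult.Unhealthy(), tags: new[] { "ready" });

var app = builder.Build();

// Liveness: is the process itself up and responding?
app.MapHealthChecks("/health/live", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("live")
});

// Readiness: are the dependencies this service needs available yet?
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("ready")
});

app.Run();

static bool CanConnectToDatabase() => true; // stand-in for a real connectivity probe
```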

Given the complexity and often overwhelming number of services to keep track of in a distributed application, we tend to automate our hosting, deployment, and monitoring duties as much as possible. Containerization, which we will discuss shortly, is a standard way of hosting our applications in a lightweight and stable manner, and orchestration tools such as Kubernetes make it easy for us to perform health probes on the services and the container, which will inform us of the infrastructure’s health. Ultimately, we can leverage several automated tools to monitor and report on our services and dependencies.

We have spent some time exploring nuances surrounding our microservices and how they relate to each other. However, we have yet to discuss the nuances surrounding having one or more client applications that need to interact with several services.

API Gateways and backend for frontend

An application based on the microservices architecture will have a user interface that will interact with several web services. Recall that our services have been designed to rule over a business domain, and many operations that users complete span several domains. Because of this, the client application will need to have knowledge of the services and how to interact with them to complete one operation. By extension, we can have several clients in web and mobile applications.

The problem is that we will need to implement too much logic in the client application to facilitate all the service calls, which can lead to a chatty client app. Then, maintenance becomes more painful with each new client that we introduce. The solution here is to consolidate a point of entry to our microservices. This is called an API gateway, and it will sit between the services and the client app.

An API gateway allows us to centralize all our services behind a single endpoint address, making it easier to implement API logic. After a request is sent to the central endpoint, it is routed to the appropriate microservice, which exists at a different endpoint. The API gateway allows us to create a central register for all endpoint addresses in our application and add intermediary operations to massage request and response data in between requests as needed. Several technologies exist to facilitate this operation, including a lightweight ASP.NET Core application called Ocelot. As far as cloud options go, we can turn to Azure API Management.
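
A minimal Ocelot gateway sketch, assuming the route mappings live in an ocelot.json file, might look like this:

```csharp
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

var builder = WebApplication.CreateBuilder(args);

// ocelot.json holds the mappings from the gateway's single entry point
// to each downstream microservice's endpoint.
builder.Configuration.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);
builder.Services.AddOcelot(builder.Configuration);

var app = builder.Build();

await app.UseOcelot();   // route incoming requests to the configured downstream services
app.Run();
```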

Now that we have a gateway, we have another issue where we have multiple clients, and each client has different API interaction needs. For instance, mobile devices will need different caching and security allowances than the web and smart device client apps. In this case, we can implement the backend for frontend pattern. This is much simpler than it sounds, but it needs to be properly implemented to be effective and can lead to additional hosting and maintenance costs.

This pattern requires us to provide a specially configured gateway that caters to each targeted client app’s needs. If our healthcare application needs to be accessed by web and mobile clients, we will implement two gateways. Each gateway will expose a specific API endpoint that the relevant client will consume.

Now that we are catering to various client applications and devices, we need to consider security options that facilitate any client application.

Bearer token security

Security is one of the fundamental parts of application development that we need to get right. Releasing software that does not control user access and permissions can have adverse side effects in the long run and allow for the exploitation of our application.

Using ASP.NET Core, we have access to an authentication library called Identity Core, which supports several authentication methods and allows us to easily integrate authentication into our application and supporting database. It has optimized implementations for the various authentication methods and authorization rules we implement in web applications and allows us to easily protect certain parts of our application.

Typically, we use authentication to identify the user attempting to gain access to our system. This usually requires the user to input a username and a password. If their information can be validated, we can check what they are authorized to do and then create a session using their basic information. All of this is done to streamline the experience for the user, where they can freely use different parts of the application as needed without reauthenticating at each step. This session is also referred to as a state.

In API development, we do not have the luxury of creating a session or maintaining a state. Therefore, we require that a user authenticates each request to secured API endpoints. This means we need an efficient way to allow the user to pass their information with each request, evaluate it, and then send an appropriate response.

Bearer tokens are the current industry standard method of supporting this form of stateless authentication needs. A bearer token gets generated after the initial authentication attempt, where the user shares their username and password. Once the information is validated, we retrieve bits of information about the user, which we call claims, and combine them into an encoded string value, which we call a token. This token is then returned in the API response.

The application that triggered the authentication call initially will need to store this token for future use.

Now that the user has been issued a token, any follow-up API calls will need to include this token. When the API receives subsequent requests to secure endpoints, it will check for the presence of the token in the header section of the API request and then seek to validate the token for the following:

  • Audience: This is a value that depicts the expected receiving application of the token
  • Issuer: This states the application that issued the token
  • Expiration Date and Time: Tokens have a lifespan, so we ensure that the token is still usable
  • User claims: This information usually includes the user’s roles and what they are authorized to do

We can choose which of these points we wish to validate each time a request comes in with a token; the stricter the validation rules, the more difficult it is for someone to forge or reuse a token on an API. A minimal configuration sketch follows.
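
This sketch shows bearer token validation in ASP.NET Core; the issuer, audience, and signing key values are illustrative and would normally come from configuration or the central OAuth provider:

```csharp
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);

// Issuer, audience, and key values are illustrative placeholders.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = "https://auth.healthcare-app.local",
            ValidateAudience = true,
            ValidAudience = "appointments-api",
            ValidateLifetime = true,                    // enforce the token's expiration
            ValidateIssuerSigningKey = true,
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Key"]!))
        };
    });
builder.Services.AddAuthorization();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

app.Run();
```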

Securing one API is simple enough, but this becomes very tedious and difficult to manage when this effort is spread across several APIs, as in a microservice-based application. It is not a good experience to have a user required to authenticate several times when accessing different parts of an application that might be using different services to complete a task. We need a central authority for token issuance and validation that all services can leverage. Essentially, we need to be able to use one token and validate a user across several services.

Considering this new challenge, we need to use an OAuth provider to secure our services centrally and handle our user information and validation. An OAuth provider application can take some time to configure and launch, so several companies offer OAuth services as SaaS applications. Several options exist to set up and host your OAuth provider instance, but this will require more maintenance and configuration efforts. The benefit of self-hosting is that you have more control over the system and the security measures you implement.

Duende IdentityServer is the best-known self-hosted option for an OAuth provider. It is based on ASP.NET Core and leverages Identity Core capabilities to deliver industry-standard security measures. It is free for small organizations and can be deployed as a simple web service that acts as the central security authority for our microservices. Duende also offers a hosted model, which can be compared with other hosted options such as Microsoft Azure AD and Auth0, to name a few.

Now that we have explored securing our microservices, we need to figure out the best way to host them alongside their various dependencies. Do we use a team of web servers, or do more efficient options exist?

Containers and microservices

We can typically host a web application or API and its supporting database on one server. This makes sense because everything is in one place and is easy to get to and maintain. But this server will also need to be very powerful and be outfitted with several applications and processes to support the different moving parts of the application.

Therefore, we should consider splitting the hosting considerations and placing the API and the database on separate machines. This costs more, but we gain better maintenance and hosting flexibility and ensure that we do not burden either machine with applications and processes it does not need.

When dealing with microservices, we will run into a challenging situation when attempting to replicate these hosting considerations for several services. We want each microservice to be autonomous functionally and from a hosting standpoint. Our services should share as little infrastructure as possible, so we don’t want to risk placing more than one service on the same machine. We also don’t want to burden a single device with supporting several hosting environment requirements since each microservice might have different needs.

We turn to container hosting as a lightweight alternative to provisioning several machines. Each container represents a slice of machine resources, with just the storage and performance resources needed for an application to run. Translating this concept into our hosting needs, we can create slices of these optimized environments for each microservice, database, and other third-party services as needed.

The advantage here is that we can still create optimal hosting environments for each service and supporting database while requiring far fewer machines to support this endeavor. Another benefit is that each container is based on an image, representing the exact needs of the environment for the container. This image is reusable and repeatable, so we have less to worry about when transitioning between environments and trying to provision an environment per service. The image will always produce the same container, and there will be no surprises during deployments.

Containers are widely used and supported in the development community. The premier container hosting option is Docker, an industry-leading container technology provider. Docker provides an extensive repository of safe, maintained container images for the popular third-party applications we typically leverage during development. It is also an open community, so we can create images for our own needs and add them to the community repository for later access, whether for public or private use.

When using .NET, we can generate a Dockerfile, a file containing declarations about the image that should be used to create a container for the service we wish to host. A Dockerfile is written using Docker’s own declarative instruction syntax and outlines a base image and then special build and deploy instructions. The base image states that we are borrowing from an existing image, and we then state that we wish to deploy our application to a container after combining the existing image and this application.

When we use container hosting, we generate a Dockerfile for each service, and we need to orchestrate the order in which the containers are started and their dependencies. For instance, we probably don’t want to start a service before its supporting database’s container starts. For this, we must use an orchestrator. Industry-leading options include docker-compose and Kubernetes.

docker-compose is a simple and easy-to-understand option for container orchestration operations. Using a YAML configuration file (docker-compose.yml), docker-compose will refer to each Dockerfile and allow us to outline any unique parameters we wish to include when executing it. We can also outline dependencies and provide specific configuration values for the execution of that Dockerfile and the resulting container. Now, we can orchestrate the provisioning of the containers to support our web services, databases, and other applications with one command. We can even reuse Dockerfiles to create more than one container and have several containers running the same service on different ports and possibly with different configurations. We can see where this can come in handy when implementing the backend for frontend pattern.

Container hosting is platform-agnostic – we can leverage several hosting options, including cloud hosting options. Major cloud hosting providers such as Microsoft Azure and Amazon Web Services provide container hosting and orchestration support.

Now that we have our hosting sorted out, we need to be able to track what is happening across the application. Each service should provide logs of its activities, and more importantly, we need to be able to trace the logs across the various services.

Centralized logging

Logging is an essential part of post-deployment and maintenance operations. Once our application has been deployed, we need to be able to track and trace errors and bottlenecks in our application. This is easy enough to accomplish when we have one application and one logging source. We can always go to one space and retrieve the logs of what has happened.

.NET has native support for simple to advanced logging options. We can leverage the native logging operations, which support powerful integrations with several logging destinations, such as the following:

  • Console: Shows the log outputs in a native console window. Usually used during development.
  • Windows Event Log: Also known as Event Viewer, this is a convenient way to view logs of several applications on a Windows-based machine.
  • Azure Log Stream: Azure has a central logging service that supports logging for the application.
  • Azure Application Insights: A powerful log aggregation service provided by Microsoft Azure.

When writing logs, we need to decide on the type of information we are logging. We want to avoid logging sensitive information that could compromise user or system secrets, since we want to protect the integrity of our system as much as possible. This will be relative to the context under which the application operates. Still, responsibility, wisdom, and maturity must be exercised during this scoping exercise. We also want to consider that we do not want to include too much clutter in the logs. Having chatty logs can be as bad as having no logs at all.

We also want to ensure we choose the correct classification for each log message. We can log messages as being any of the following levels:

  • Information: General information about an operation.
  • Debug: Usually used for development purposes. Should not be visible in a live environment.
  • Warning: Depicts that something might not have gone as expected but is not a system error.
  • Error: This occurs when an operation fails. This level is usually used when an exception is caught and/or handled.
  • Critical/Fatal: Used to highlight that an operation has failed and led to a system failure.

Choosing the correct classification for the log messages goes a long way in helping the operations team to monitor and track messages that need to be prioritized.

We can also add unique configurations for each logging destination and fine-tune the types of messages each will receive. This ability becomes relevant if, for example, we wish to send only informational messages to the Windows Event Log while all warnings, errors, and critical messages should be visible in Azure Log Stream and Application Insights. .NET Core allows us to make these granular adjustments.

We can further extend the capabilities of the native logging libraries by using extension packages such as Serilog. Serilog is the most popular logging extension library used in .NET applications. It supports more logging destinations such as rolling text files, databases (SQL Server, MySQL, PostgreSQL, and more), and cloud providers (Microsoft Azure, Amazon Web Services, and Google Cloud Platform), to name a few. We can write to multiple destinations with each log message by including this extension package in our application.
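
A hedged Serilog configuration sketch, assuming the Serilog.AspNetCore package with the console, file, and Seq sinks and an illustrative local Seq address, might look like this:

```csharp
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// The file path and Seq address below are illustrative; each WriteTo call requires
// the corresponding Serilog sink package.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .Enrich.FromLogContext()
    .WriteTo.Console()                                    // development output
    .WriteTo.File("logs/service-.txt", rollingInterval: RollingInterval.Day)
    .WriteTo.Seq("http://localhost:5341")                 // central log aggregator
    .CreateLogger();

builder.Host.UseSerilog();                                // replace the default .NET logger

var app = builder.Build();
app.Run();
```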

Individual application logging can be set up relatively quickly, but this concept becomes complex when we attempt to correlate the logs. When a user has trouble accessing one feature, we need to check several possible points of failure, considering that our microservice application will trigger several actions across several services. We need an efficient way to collate the logs produced by each service and, by extension, be able to trace and relate calls associated with a single operation.

Now, we turn to log aggregation platforms. Simply put, they act as log destinations and are designed to store all logs that are written to them. They also provide a user interface with advanced querying support. This is needed for a distributed application since we can now configure the aggregator as a central logging destination for several applications, and we can more easily query the logs to find logs that might be related but from different sources. We can also configure them to monitor and alert when logs of specific categorizations are received.

Popular options for log aggregation include Seq, the Elasticsearch, Logstash, and Kibana (ELK) stack, and hosted options such as Azure Application Insights and Datadog. Each platform has its strengths and weaknesses and can be leveraged for small to large applications. Seq is a popular option for small to medium-sized applications, and it has easy-to-use tools and supports robust querying operations. Still, aggregators have some limitations, and those come up when we need to properly trace logs from several sources.

Tracing logs from several sources is referred to as distributed logging. It involves embedding common information, such as correlation tags and trace IDs, in our log messages so that related entries can be correlated to a single event. This requires us to write more enriched logs containing more details and headers that a log tracing tool can use to give us the best possible picture of an operation. An emerging technology to support this concept is OpenTelemetry, which will produce logs with greater detail and correlation from our various applications.

We can now use more specialized tools, such as Jaeger, to sift through the enriched logs and perform even more complex queries across the logs. Jaeger is a free, lightweight, and open source tool that can get us started with this concept, but we can once again use Azure Application Insights for production workloads.

Summary

In this chapter, we explored the various moving parts of microservices and how we can leverage different development patterns to ensure that we deliver a stable and extendable solution. We saw that the microservices architecture introduces a new problem for every solution it provides, and we need to ensure that we are aware of all the caveats of each decision we make.

Ultimately, we need to ensure that we properly assess and scope the needs of our application and refrain from introducing a microservices architecture where it might not be required. If we end up using one, we must ensure that we make the best use of the various technologies and techniques that support our application. Always seek to do the minimum necessary to address an issue before introducing complexity in the name of advanced architecture.

I hope you enjoyed this journey and have enough information to inform the decision-making and development processes that will be involved when you start developing microservices with ASP.NET.
