© Rahul Sharma, Akshay Mathur 2021
R. Sharma, A. Mathur, Traefik API Gateway for Microservices, https://doi.org/10.1007/978-1-4842-6376-1_1

1. Introduction to Traefik

Rahul Sharma, Patpargunj, Delhi, India
Akshay Mathur, Gurgaon, Haryana, India

Over the last couple of years, microservices have become a mainstream architecture paradigm for enterprise application development. They have replaced the monolithic architecture, which was mainstream for the past couple of decades. Monolithic applications are developed in a modular architecture: discrete logic components, called modules, segregate responsibilities. But even though the application consists of discrete components, it is packaged and deployed as a single executable. The result is very tight coupling: changes to individual modules can't be released independently. You are required to release the complete application each time.

A monolithic architecture is well suited when you are building an application with many unknowns. In such cases, you often need quick prototyping for every feature, and a monolith's unified code base helps. The architecture offers the following benefits.
  • Simple to develop.

  • Simple to test. For example, you can implement end-to-end testing by launching the application and testing the UI with Selenium.

  • Simple to deploy. You only have to copy the packaged application to a server.

  • Simple to scale horizontally by running multiple copies behind a load balancer.

In summary, you can deliver the complete application quickly in these early stages. But as the application grows organically, the gains erode. In the later stages, the application becomes harder to maintain and operate. Most subcomponents take on more responsibility and become large subsystems, each needing a team of developers for its maintenance. As a result, the complete application is usually maintained by multiple development teams. But the application is highly coupled, so the teams are interdependent when delivering new features. Because everything ships as a single binary, the organization faces the following issues.
  • Quarterly releases: Application features take more time to release. Most of the time, a feature cuts across various subsystems. Each team can do its own development, but deployment requires the entire set of components, so teams can seldom work independently. Releases become a big coordinated effort across different teams, feasible only a few times a year.

  • Deprecated technology: Technology must be upgraded periodically: upgrades patch known vulnerabilities, and application libraries add new features as well. But upgrading libraries in a monolith is difficult. A team can try to adopt the latest version, but it must make sure the upgrade does not break other subsystems. In some situations, an upgrade can even lead to a complete rewrite of subsystems, which is a very risky undertaking for the business.

  • Steep learning curve: Monolithic applications often have a large code base, while individual developers typically work on a small subset of it. The sheer volume of code is an initial psychological bottleneck. Moreover, since the application is tightly coupled, developers need to know how others invoke their code. The onboarding time for a new developer is therefore long. Even experienced developers find it hard to change modules that have not been well maintained, creating a knowledge gap that widens over time.

  • Application scaling: Typically, a monolithic application can only be scaled vertically. Scaling it horizontally is possible, but you must first determine how each subsystem maintains its internal state. In either case, the application requires resources for all subsystems; resources can't be selectively provided to the subsystems under load. Scaling is thus an all-or-nothing affair with a monolith, and often a costly one.

Faced with these issues, organizations look for alternative architectures.

Microservice Architecture

Microservice architecture is an alternative to the monolithic architecture (see Figure 1-1). It converts the single application into a distributed system with the following characteristics.
  • Services: Microservices are developed as services that can work independently and provide a set of business capabilities. A service may depend on other services to perform the required functionality. Independent teams can develop each of these services. The teams are free to select and upgrade the technology they need for their service. An organization often delegates full responsibility for the services to their respective teams. The teams must ensure that their respective service runs as per the agreed availability and meets the agreed quality metrics.

  • Business context: A service is often created around a business domain, which makes sure it is neither too fine-grained nor too big. A service first needs to establish whether it is the owner of a business function or a consumer of it. The owner of a function must maintain all the corresponding data; if it needs a supporting function, it may consume it from another service. Determining business context boundaries thus keeps service dependencies in check. Microservices aim to build a system with loose coupling and high cohesion. Aggregating all logically related functionality makes each service an independent product.

  • Application governance: In enterprise systems, governance plays an important role. You rarely want to build systems that are difficult to run, so a governance group keeps a check on the technologies developers use, ensuring the operations team can still run the system. Microservice architecture, however, gives complete ownership to the respective teams. The ownership is not limited to development; it also includes service operations. Because of this, most organizations adopting microservices must also adopt DevOps practices, which enable development teams to operate and govern a service efficiently.

  • Automation: Automation plays an important role in microservices. It applies at all stages: infrastructure automation, test automation, and release automation. Teams need to operate efficiently, test more often, and release quickly. This is only possible if they rely more on machines and less on manual intervention. Post-development manual testing is a major bottleneck, so teams automate their testing in numerous ways: API testing, smoke testing, nightly tests, and so forth. Manual effort is usually reduced to exploratory testing that validates a build. Release and infrastructure preparation are automated using DevOps practices.

Figure 1-1 Monolith vs. microservices

In summary, a monolith has a centralized operating model: all code resides in one place, everyone uses the same libraries, releases happen simultaneously, and so forth. Microservices, at the other end, are a completely decentralized approach: teams are empowered to make the best decisions, with complete ownership. Adopting such an architecture requires a change not only in software design but also in organizational interaction. Organizations reap the following benefits from this application design.

Agility

This is one of the biggest driving factors for organizations adopting the microservice architecture. Organizations become more adaptive and can respond more quickly to changing business needs. The loose coupling offered by the architecture allows accelerated development: small, loosely coupled services can be built, modified, and tested individually before being deployed to production. The model dictates small, independent development teams working within their defined boundaries, responsible for maintaining high levels of software quality and service availability.

Innovation

The microservice architecture promotes small, independent development teams supporting each service. Each team has ownership within its service boundary and is responsible not only for developing the service but also for operating it. Teams thus adopt a lot of automation and tooling to meet these goals, and these high-level goals drive the engineering culture within the organization.

Moreover, development teams are usually well aware of the shortcomings of their services. Such teams can address these issues using their autonomous decision-making capability. They can fix the issues and improve service quality frequently. Here again, teams are fully empowered to select appropriate tools and frameworks for their purpose. It ultimately leads to the improved technical quality of the overall product.

Resilience

Fault isolation is the act of limiting the impact of a failure to a limited subsystem or component, allowing a subsystem to fail as long as it does not take down the complete application. The distributed nature of microservice architecture offers fault isolation, a principal requirement for building resilient systems. Any service experiencing failures can be handled independently: developers can fix issues and deploy new versions while the rest of the application continues to function.

Resilience, or fault tolerance, is often defined as an application's ability to function properly while some of its parts fail. Distributed systems like microservices rely on tenets such as circuit breaking and throttling to contain fault propagation. This is an important aspect: done right, it delivers a resilient system; left unhandled, it leads to frequent downtime as failures cascade. Resilience also improves business agility, as developers can release new services without worrying about system outages.
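As an illustration, modern gateways expose circuit breaking as plain configuration. The following is a minimal sketch using Traefik v2's circuit-breaker middleware (Traefik is introduced later in this chapter); the orders-service image and the threshold are hypothetical.
$ docker run -d --name orders \
    --label 'traefik.http.middlewares.orders-cb.circuitbreaker.expression=NetworkErrorRatio() > 0.30' \
    --label 'traefik.http.routers.orders.middlewares=orders-cb' \
    orders-service:latest
# When more than 30% of calls to the service fail at the network level,
# the gateway trips the circuit and stops forwarding traffic to it.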

Scalability

Scalability is defined as the capability of a system to handle a growing amount of work. In a monolith, it is easy to quantify system scalability, and scaling by adding more hardware is straightforward in principle. But as load increases, not all subsystems receive proportionally more traffic; some parts of the system get more traffic than others, so overall performance is determined by a subset of the subsystems. Scaling can also be difficult in practice, as different modules may have conflicting resource requirements. Overall, an overgrown monolith underutilizes the hardware and often exhibits degraded system performance.

The decoupling offered by microservices enables the organization to understand the traffic that each microservice serves. This divide-and-conquer principle helps improve overall system performance. Developers can adopt appropriate task parallelization or clustering techniques for each service to improve throughput, along with appropriate programming languages and frameworks, fine-tuned with the best possible configuration. Lastly, hardware can be allocated according to service demand rather than by scaling the entire ecosystem.
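For instance, on an orchestrator like Kubernetes, capacity follows service demand rather than the whole application; the deployment names below are hypothetical.
$ kubectl scale deployment orders --replicas=6    # busy service gets more instances
$ kubectl scale deployment catalog --replicas=2   # quiet service stays small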

Maintainability

Technical debt is a major issue in monolithic systems. Overgrown monoliths often have parts that are not well understood by the whole team. Addressing technical debt in a monolith is difficult, as people fear breaking working features. There have even been cases where addressing technical debt in a monolith accidentally revived unwanted dead code.

Microservice architecture helps mitigate the problem by following the divide-and-conquer principle. The benefits are analogous to object-oriented application design, where the system is broken into objects. Each object has a defined contract, which leads to improved maintenance of the overall system, and developers can unit test each refactored object to validate correctness. Similarly, microservices created around a business context have a defined contract. These loosely coupled services can be refactored and tested individually, and developers can address a service's technical debt while validating its contract. Adopting microservices is often referred to as paying off a monolith's technical debt.

You have looked at the advantages of microservice architecture, but the architecture also brings many challenges. Some are due to the distributed nature of the system, while others are caused by diversity in the application landscape: services can be implemented in different technologies and scaled differently, and there can be multiple versions of the same service serving different needs. Teams should strategize to overcome these challenges during application design and not as an afterthought. Application deployment is one such important aspect. Monoliths have been deployed on a three-tier model, but the same model does not work well with microservices. The next section discusses the changes required in the deployment model.

n-Tier Deployment

n-tier deployment is a design in which web applications are segregated into application presentation, application processing, and data management functions. These functions are served by independent components known as tiers. The tiers allow segregation of duties; all communication is linear across them, and each tier is managed by its own software subsystem. n-tier deployment offers improved application scalability. Monolithic applications are usually deployed as three-tier applications (see Figure 1-2).
  • Presentation tier: This tier is responsible for serving all static content of the application. It is usually managed with web servers like Apache, Nginx, and IIS. These web servers not only serve an application's static UI components but also handle dynamic content by routing requests to the application tier. Web servers are optimized to handle many requests for static data, so they perform well under load. Some of these servers also provide load balancing mechanisms that can support multiple nodes of the application tier.

  • Application tier: This tier provides all processing functions. It contains the business logic that delivers the core capabilities of an application. The development team builds it in a suitable technology stack like Java, Python, or .NET. This tier serves the user requests it receives from the presentation tier and generates appropriate dynamic responses. To serve a request, it may need additional data, for which it interacts with the data tier.

  • Data tier: This tier provides data storage and data retrieval capabilities. These data management functions are outside the scope of the application, so the application uses a database to fulfill these needs. The data tier provides data manipulation functions using an API, which the application tier invokes.

Figure 1-2 Three-tier

There are many benefits to using a three-tier architecture, including scalability, performance, and availability. You can deploy the tiers on different machines and use the available resources optimally. The application tier delivers most of the processing capability, so it needs the most resources; the web servers serve static content and do not need many. This deployment model also improves application availability by allowing a different replication strategy for each tier.

Four-Tier Deployment

The three-tier deployment works well for monolithic applications, where the monolith is the application tier. But with microservices, the monolith is split into many services, and the three-tier model is no longer good enough. Microservice architecture calls for the following four-tier deployment model (see Figure 1-3).
  • Content delivery tier: This tier is responsible for delivering content to the end user. A client can use the application in a web browser or a mobile app, which often calls for different user interfaces targeting different platforms. The content delivery tier ensures that the application UI works well across these platforms. It also abstracts the services tier, allowing developers to quickly develop new services for changing business needs.

  • Gateway tier: This tier has two roles.
    • Dynamically discover the deployed services and correlate them with the user request

    • Route requests to services and send responses

Figure 1-3 Four-tier

For each request, the gateway tier receives data from the underlying services and sends back a single aggregated response. It has to handle different scenarios like role-based access, delayed responses, and error responses. This offloads those concerns from the services tier, which can focus solely on business requirements.
  • Services tier: This tier provides all business capabilities and is designed around the microservices approach. It serves data to its clients, whether other services or the application UI, without concern for how the data is consumed. Each service can be scaled based on its request load pattern, and clients are responsible for discovering new instances. All of this enables a pluggable application ecosystem: new services can be built by consuming existing ones and readily integrated into the enterprise landscape.

  • Data tier: This tier provides data storage and data retrieval capabilities. Data management is still beyond the application's scope, but each service now has its own exclusive data management infrastructure, whether an RDBMS like MySQL or a document store like MongoDB.

The four-tier architecture (see Figure 1-3) was pioneered by early microservices adopters like Netflix, Amazon, and Twitter. At the center of the paradigm, the gateway tier binds the complete solution together: it must link the remaining tiers so that all of them can communicate, scale, and deliver. In the three-tier architecture, the presentation tier ran web servers, which could be adapted for the gateway tier. But first, you should determine the characteristics required of a gateway tier solution.

Gateway Characteristics

A gateway is the point of entry for all user traffic. It is often responsible for delegating requests to different services, collating their responses, and sending them back to the user. In a microservice architecture, the gateway must also work with the dynamic nature of the system. The following sections discuss the characteristics required of the gateway component.

Application Layer Protocols

In the OSI networking model, traffic can be managed at Layer 4 or Layer 7. Layer 4 offers only low-level connection details; traffic management at this layer can be based only on the protocol (TCP/UDP) and port. Layer 7, on the other hand, operates at the application layer and can route traffic based on the actual content of each message. HTTP is the most widely used application protocol, and you can inspect HTTP headers and bodies to perform service routing.
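For example, Traefik v2 (introduced later in this chapter) expresses Layer 7 routing as rules over the host and path of each request. A hedged sketch, with a hypothetical users-service container registered through Traefik's Docker provider:
$ docker run -d --name users \
    --label 'traefik.http.routers.users.rule=Host(`api.example.com`) && PathPrefix(`/users`)' \
    users-service:latest
# Only requests whose Host header and URL path match the rule are routed
# to this container; everything else is handled by other routers.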

Layer 7 load balancing enables smarter load-balancing decisions. The load balancer can apply optimizations like compression and connection reuse, and you can configure buffering to offload slow connections from the upstream servers to improve overall throughput. Lastly, you can apply encryption to secure your communication.

In the current ecosystem, there is a wide variety of application protocols to choose from, each serving a set of needs. A team may adopt a particular protocol, say gRPC, because it is better suited to its microservice; this does not require other teams to adopt the same protocol. But the gateway must delegate traffic to most of these services, so it needs to support the required protocols. Since the list of application protocols is extensive, the gateway needs to support a rich set of current protocols, and it should be easy to extend this list with new ones.

PROTOCOLS

HTTP/2 is the next version of the HTTP/1.1 protocol. It is a binary protocol and does not change any of the existing HTTP semantics. But it offers real-time multiplex communication and improves the application performance by employing better utilization of underlying TCP connections.

gRPC is a binary RPC protocol. It offers various features, such as multiplexing, streaming, health metrics, and connection pooling. It is often used with payload serialization like JSON or protocol buffers.

REST (REpresentational State Transfer) is an application protocol based on HTTP semantics. The protocol represents resources that are accessed using HTTP methods. It is often used with a JSON payload to describe the state.

Another important aspect is the interprocess communication paradigm. Traditionally, applications are synchronous and based on HTTP, but with data-driven microservices, you may want to adopt an asynchronous model, such as ReactiveX or ZeroMQ. A gateway component needs to support both forms of communication, and developers should be able to pick the model that works for their application.

Dynamic Configuration

In a monolithic application, you know the location of your backend application. The location does not change often, and new instances are not created at runtime. Since all servers are known, it is easy to provide their details in a static configuration file.

But in a microservices application, that approach does not work. The first challenge arises from the number of microservices. There are usually only a few services at the start, but as the system grows, there can be multiple fine-grained services for every business function; the count often grows to a couple of hundred. Allocating a static address to each of these services and keeping a static file up to date is a daunting task.

The second challenge arises from the scalability microservices offer. Service instances are replicated under load and removed when the load subsides. This runtime behavior, multiplied by the number of services in the ecosystem, makes it impossible to track all the changes in a static configuration file.

To solve the discovery problem, microservice architecture advocates a service registry: a database containing the network locations of service instances. The registry must be updated in near real time, reflecting new locations as soon as they are available. It also needs high availability, so it usually consists of a cluster of nodes that replicate data to maintain consistency.

SERVICE REGISTRY PROVIDERS

The following are the most widely used service registry providers.

Eureka is a REST-based solution for registering and querying service instances. Netflix developed the solution as part of its microservices journey. It is often used in the AWS cloud.

etcd is a highly available, distributed, consistent, key-value store. It is used for shared configuration and service discovery. Kubernetes uses etcd for its service discovery and configuration storage.

Consul is a solution for discovering and configuring services, created by HashiCorp. Besides the service registry, Consul provides extensive functions like health checking and locking. Consul provides an API that allows clients to register and discover services.

Apache Zookeeper was created for the Hadoop ecosystem. It is a high-performance coordination service for distributed applications. Curator is a Java library created over Zookeeper to provide service discovery features.
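As a concrete illustration of registration and discovery, Consul's HTTP API accepts a service definition and can then be queried for the registered instances. A minimal sketch, assuming a local Consul agent and a hypothetical users service:
$ cat > users.json <<'EOF'
{ "Name": "users", "Address": "10.0.0.12", "Port": 8080 }
EOF
$ curl -X PUT --data @users.json http://localhost:8500/v1/agent/service/register
$ curl http://localhost:8500/v1/catalog/service/users   # lists registered instances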

The gateway component needs to interact with the service registry. It could poll the registry, but that is inefficient; instead, the registry should push changes to the gateway, which picks them up and reconfigures itself. In summary, the gateway must integrate well with the registry.
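Traefik, discussed later in this chapter, follows this model through its providers. A hedged sketch, assuming Traefik v2's Consul Catalog provider and a local agent:
$ ./traefik --providers.consulcatalog.endpoint.address=127.0.0.1:8500
# Traefik watches the Consul catalog and reconfigures its routes as
# services register and deregister.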

Hot Reloads

In a microservice architecture, numerous services are deployed, existing services are updated, and new services are added. All these changes must be propagated to the gateway tier. The gateway component itself may also need upgrades to address issues. All these operations must be performed without any impact on the end user; downtime of even a few seconds is detrimental. If the gateway required downtime for service updates, the downtime would be multiplied by the frequency of service updates, leading to frequent service outages.

The gateway component should handle all updates without requiring a restart, making no distinction between a configuration update and an upgrade.
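For example, Traefik v2 watches its dynamic configuration sources and applies changes on the fly. A minimal sketch using the file provider, with a hypothetical dynamic.yml holding routing rules:
$ ./traefik --providers.file.filename=dynamic.yml --providers.file.watch=true
# Edits to dynamic.yml are detected and applied without restarting Traefik.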

Observability

Observability is a concept borrowed from control theory: it is the process of knowing the state of a system from outside the system. It covers all the information you need to diagnose failures. Observability in microservices is completely different from observability in monolithic systems. A monolithic application in a three-tier deployment has the following logs.
  • Request log

  • Application log

  • Error log

You can correlate these logs to determine, with fair accuracy, what the system has been doing. But in a microservice architecture, there are tens or hundreds of services to keep track of, and predicting application state from logs alone is no longer possible. New mechanisms are needed. The microservice architecture recommends the following methods.

Tracing

Request tracing is a method to profile and monitor distributed architectures such as microservices. A user request is typically handled by multiple services, each performing its respective processing. Each service records its work as request spans, and all the spans of a request are combined into a single trace. Request tracing thus shows the time spent in each service for a particular request.

Any service failure is easily visible in a request trace, and the trace also helps locate performance bottlenecks. Tracing is a great solution for debugging application behavior, but it comes at the cost of consistency: all services must propagate proper request spans. If a service does not provide a span, or regenerates it while ignoring the existing headers, the resulting trace cannot capture that service.

The gateway component receives all traffic entering the microservices ecosystem and may distribute a request across different services. Thus, the gateway needs to generate request spans for tracing.
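In Traefik v2, for instance, tracing is switched on through configuration. A hedged sketch, assuming a Jaeger agent listening on its default port:
$ ./traefik --tracing.jaeger=true \
    --tracing.jaeger.localagenthostport=127.0.0.1:6831
# Traefik now opens a span for each incoming request and propagates the
# trace context to the upstream services.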

Metrics

Microservice best practices recommend generating application metrics that can be analyzed. These metrics project the state of your services; collected over time, they help analyze and improve service performance, and in failure scenarios they help determine the root cause. Application-level metrics can include the number of queued inbound HTTP requests, request latency, database connections, and the cache hit ratio. Applications should also create custom metrics specific to their context. The gateway component must likewise export relevant metrics, such as status codes across application protocols (HTTP 2XX, 3XX, 4XX, 5XX), service error rates, request queue length, and so forth.
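Traefik v2, for example, can export such metrics in Prometheus format; the metric name below is one Traefik publishes, but treat the exact output as version-dependent.
$ ./traefik --api.insecure=true --metrics.prometheus=true
$ curl -s http://localhost:8080/metrics | grep traefik_entrypoint_requests_total
# counters broken down by entrypoint, HTTP method, and status code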

In summary, the gateway component needs to offer a wide variety of observability output. It must export stats, metrics, and logs to integrate with monitoring solutions in the microservice architecture.

TLS Termination

Data security is often a non-functional requirement of a system. Applications have long achieved data security using TLS, which encrypts communication using public-private key pairs. Terminating TLS at the gateway or presentation tier improves application performance, as the applications themselves do not handle encryption and decryption. This worked well in traditional architectures, where interprocess network calls were minimal. But in a microservice architecture, many services communicate over the network, and by security standards, unencrypted communication between services presents a grave risk. Thus, as a best practice, you are required to encrypt all network communication throughout the cluster.

Service authorization is another challenge in a microservice architecture. With microservices, many more requests travel over the network, and a service needs to know which client is making an invocation so it can place limits if that client misbehaves. These controls are necessary, as a rogue service can wreak havoc in the system. Identity can be established in many ways. Clients can pass bearer tokens, but bearer tokens can be captured and replayed by an attacker. As a best practice, you want clients to be authenticated using non-portable identities, so mutual TLS (mTLS) authentication is the recommended practice. For services to authenticate each other, each provides a certificate and key that the other trusts before a connection is established. This exchange and validation of certificates by both client and server is referred to as mutual TLS. It ensures that strong service identities are enforced as part of interprocess communication. Thus, the gateway component needs the following dual behavior (a client-side sketch follows the list).
  • TLS termination for traffic from the outside world

  • TLS identity for invoking different services using mutual TLS
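From a client's perspective, mutual TLS simply means presenting a certificate in addition to validating the server's. A sketch using curl, with hypothetical certificate files issued by an internal CA:
$ curl --cacert ca.crt --cert client.crt --key client.key \
    https://orders.internal/api/orders
# --cacert validates the server's identity; --cert/--key prove the client's.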

Other Features

The gateway component performs the dual responsibilities of a reverse proxy and a load balancer, so it must support advanced load balancing techniques. Moreover, it needs to support the following features.
  • Timeouts and retries

  • Rate limiting

  • Circuit breaking

  • Shadowing and buffering

  • Content-based routing

This list of features is not exhaustive. Load balancers often implement various security features, like authentication, DoS mitigation using IP address tagging and identification, and tarpitting. The gateway component must address these needs as well.
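Several of these features reduce to configuration in a modern gateway. A hedged sketch of rate limiting with Traefik v2's middleware (available since release 2.2), again with a hypothetical users-service container:
$ docker run -d --name users \
    --label 'traefik.http.middlewares.users-rl.ratelimit.average=100' \
    --label 'traefik.http.routers.users.middlewares=users-rl' \
    users-service:latest
# The router now admits an average of 100 requests per second and
# rejects the excess with 429 responses.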

We have discussed the high-level behaviors expected from a gateway solution. These needs form a wish list that can be held against established market products like Apache, Nginx, and HAProxy. These servers support some of the features, but others must be handled with workarounds. In summary, these battle-tested solutions do not have first-class support for microservice architecture; their architectures were developed a decade ago, when the list of requirements was different. The next section discusses Traefik, an open source product created to handle microservices deployment needs.

Traefik

Traefik is an open source API gateway designed to simplify the operational complexity of microservices. It achieves this by autoconfiguring itself for the deployed services: as per the product documentation, developers should only be responsible for developing services and deploying them; Traefik configures itself with sensible defaults and routes requests to those services.

Today's microservices have changing needs, and Traefik supports them through a pluggable architecture. It supports every major cluster technology, such as Kubernetes, Docker, Docker Swarm, AWS, Mesos, and Marathon (see Figure 1-4). Each of these engines has its own integration point, known as a provider. A provider connects to the orchestration engine, determines the services running on it, and passes this information back to the Traefik server, which applies it to its routing. There is no static configuration file to maintain, and Traefik can integrate with multiple providers at the same time.
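A sketch of this autoconfiguration with the Docker provider:
$ ./traefik --providers.docker=true
# Traefik connects to the Docker daemon, inspects container labels, and
# builds its routing table; routes appear and disappear as containers
# are started and stopped.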
Figure 1-4 Traefik

Traefik was developed in a Unix-centric way and is built in Go. It delivers fair performance, though it has seen some memory issues; a large community of active developers continues to work on it. Traefik's overall performance is a little below established market leaders like Nginx, but it makes up for it by providing first-class support for all microservices features.

Note

Traefik has more than 25K GitHub stars (at the time of writing), making it one of the most popular projects on GitHub.

Installation

Traefik is released often. Binary artifacts for these releases are available on the project release page (https://github.com/containous/traefik/releases). The product is released for every supported OS and architecture. At the time of writing this book, Traefik 2.2.0 was the latest release (see Figure 1-5).
Figure 1-5 Traefik release page

For the remainder of the chapter, we work with the macOS version, but you can download a suitable release for your platform from the release page; one possible method is sketched below.
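The release archive can be fetched and unpacked from the command line. A sketch, assuming the artifact name that the v2.2.0 release page uses for macOS:
$ curl -LO https://github.com/containous/traefik/releases/download/v2.2.0/traefik_v2.2.0_darwin_amd64.tar.gz
$ tar -xzf traefik_v2.2.0_darwin_amd64.tar.gz
Extracting the archive yields the traefik binary: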
$ ls -al
total 150912
-rw-rw-r--@  1 rahulsharma  staff  551002   Mar 25 22:38 CHANGELOG.md
-rw-rw-r--@  1 rahulsharma  staff  1086     Mar 25 22:38 LICENSE.md
-rwxr-xr-x@  1 rahulsharma  staff  76706392 Mar 25 22:55 traefik

The single binary provides a simplified and streamlined experience while working across different platforms. Let’s now learn how to work with Traefik.

Traefik Command Line

Traefik can be started by invoking the traefik command. The single command can do any of the following.
  • Configure Traefik based on the provided configuration

  • Determine the Traefik version

  • Perform health-checks on Traefik (a sketch follows this list)
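For example, the health check is a subcommand. A hedged sketch, assuming the ping endpoint has been enabled:
$ ./traefik healthcheck --ping
OK: http://:8080/ping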

It is important to understand how the traefik command supports each of these behaviors. The command offers several parameters; you will work with them as you reach the relevant topics. As a first step, let's validate the Traefik version by executing the following command.
$ ./traefik version
Version:      2.2.0
Codename:     chevrotin
Go version:   go1.14.1
Built:        2020-03-25T17:17:27Z
OS/Arch:      darwin/amd64
This output shows not only the Traefik version but also the platform and the date on which the binary was built. In general, the traefik command has the following syntax.
traefik [sub-command] [flags] [arguments]
In this command, all arguments are optional. Next, you configure Traefik by invoking the command. Note that Traefik configuration can be provided in the following ways.
  • A configuration file

  • User-specified command-line flags

  • System environment variables

They are evaluated in the order listed, and Traefik applies a default value for anything left unspecified. You can execute the command without passing any of these values.
$ ./traefik
INFO[0000] Configuration loaded from flags.
This output tells you that Traefik has started with flag-based configuration and is listening on port 80. Let's validate this with a cURL request to http://localhost/.
$ curl -i http://localhost/
HTTP/1.1 404 Not Found
Content-Type: text/plain; charset=utf-8
X-Content-Type-Options: nosniff
Date: Fri, 01 May 2020 16:16:32 GMT
Content-Length: 19
404 page not found

The cURL request gets a 404 response from the server. We revisit the configuration details in the following chapters, where we discuss entrypoints, routers, and services.

Traefik API

Traefik also provides a REST API that exposes all the information available in Traefik. Table 1-1 describes a few major endpoints.
Table 1-1
API Endpoints in Traefik

/api/version : Provides information about the Traefik version
/api/overview : Provides statistics about HTTP and TCP, along with the enabled features and providers
/api/entrypoints : Lists all the entry points information
/api/http/services : Lists all the HTTP services information
/api/http/routers : Lists all the HTTP routers information
/api/http/middlewares : Lists all the HTTP middlewares information

The full list of APIs is available at traefik.io/v2.2/operations/api/#endpoints. The API is disabled by default and must be enabled by passing appropriate flags. You can activate it by starting Traefik with the api.insecure flag, which deploys the REST API as a Traefik endpoint.
rahulsharma$ ./traefik -api.insecure true
INFO[0000] Configuration loaded from flags.
Now let's look up http://localhost:8080/api/overview in the browser. The output shows the statistics returned by the API (see Figure 1-6).
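The same information is available from the command line; the JSON shape varies by version, so the output is omitted here.
$ curl -s http://localhost:8080/api/overview
$ curl -s http://localhost:8080/api/version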
Figure 1-6 API overview output

The same behavior can be achieved using the TRAEFIK_API_INSECURE environment variable, which is equivalent to the api.insecure flag. Let's run the command again with the environment variable set.
rahulsharma$ export TRAEFIK_API_INSECURE=True
rahulsharma$ ./traefik
INFO[0000] Configuration loaded from environment variables.
Note

Enabling the API on production systems is not recommended. The API can expose complete infrastructure and service details, including sensitive information.

The preceding command deployed the Traefik API in insecure mode, which is not recommended: the API should be secured by authentication and authorization, and the endpoint should be accessible only within the internal network, not exposed to the public network. The book covers these practices in later chapters.

Traefik Dashboard

The Traefik API ships with an out-of-the-box dashboard (see Figure 1-7). The dashboard is for viewing purposes only: it displays the status of all components configured in Traefik and how each deployed component is performing. It is a visual representation that operations teams can use for monitoring. Once you have started Traefik in insecure mode, look up http://localhost:8080/dashboard#/.
Figure 1-7 Traefik dashboard

The dashboard shows TCP and UDP services, along with two listening ports for HTTP-based applications. It also captures the error rate of each service.

Summary

In this chapter, you looked at how the adoption of microservices has changed the requirements for a gateway. We discussed the various behaviors expected from a gateway component. Established market products like Nginx and HAProxy have tried to adapt to these features but have been unable to provide first-class support for all the needs. Traefik was built to support them. There are various ways to configure Traefik: configuration can be passed from a file, command-line parameters, or environment variables, with sensible defaults for anything unspecified. Lastly, you looked at the API and the dashboard available in Traefik. Now that Traefik is deployed, let's configure it in the next chapter to handle a few endpoints.
