Chapter 8. Encryption of Data in Transit

As you move mission-critical workloads to production, it is very likely that you will need to implement encryption of data in transit. For certain types of data it is a requirement for compliance, and it is good security practice in any case.

Encryption of data in transit is a requirement defined by many compliance standards, such as HIPAA, GDPR, and PCI DSS. The specific requirements vary somewhat; for example, PCI DSS (Payment Card Industry Data Security Standard) has rules around encryption of cardholder data while in transit. Depending on the specific compliance standard, you may need to ensure that data in transit between the applications or microservices hosted in Kubernetes is encrypted using a recognized strong encryption algorithm.

Depending on the architecture of your application or microservices, it may be that not all data being sent over the network is classified as sensitive, so theoretically you might strictly only need to encrypt a subset of the data in transit. However, from the perspective of operational simplicity and ease of compliance auditing, it often makes sense to encrypt all data in transit between your microservices, rather than trying to do it selectively.

Even if you do not have strong requirements imposed by external compliance standards, it can still be a very good practice to encrypt data in transit. Without encryption, malicious actors with network access could see sensitive information. How you assess this risk may vary depending on whether you are using public cloud or on-prem / private cloud infrastructure, and the internal security processes you have in place as an organization. In most cases, if you are handling sensitive data, then you should really be encrypting data in transit.

If you are providing services that are accessed by clients on the public internet, then the standard practice of using HTTPS applies to Kubernetes. Depending on your microservice architecture, these HTTPS connections can be terminated on the destination microservice, or they may be terminated by a Kubernetes Ingress solution, either as in-cluster Ingress pods (e.g., when using the NGINX Ingress Controller) or as an out-of-cluster application load balancer (e.g., when using the AWS Load Balancer Controller). Note that if you are using an out-of-cluster application load balancer, it is important to make sure that the connection from the load balancer to the destination microservice also uses HTTPS, to avoid an unencrypted network hop.
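To make this concrete, here is a minimal sketch of an Ingress resource that terminates TLS in-cluster. The hostname, Secret name, and backend Service are placeholders; the backend-protocol annotation shown is specific to the NGINX Ingress Controller and instructs it to re-encrypt traffic to the backend rather than forwarding plain HTTP.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: storefront-ingress
      annotations:
        # NGINX Ingress Controller specific: use HTTPS to the backend
        # so there is no unencrypted hop inside the cluster.
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    spec:
      tls:
      - hosts:
        - shop.example.com                  # placeholder hostname
        secretName: shop-example-com-tls    # TLS cert/key stored as a Secret
      rules:
      - host: shop.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: storefront            # placeholder backend Service
                port:
                  number: 443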

Within the cluster itself, there are three broad approaches to encrypting data in transit:

  • Build encryption capabilities into your application / microservices code.

  • Use side-car or service mesh based encryption to encrypt at the application layer without needing code changes to your applications / microservices.

  • Use network level encryption, again without the need for code changes to your applications / microservices.

We will explore the pros and cons of each approach.

Building encryption into your code

There are libraries to encrypt network connections for most programming languages, so in theory you could build encryption into your microservices as you write them: for example, using SSL/TLS (as with HTTPS), or even mTLS (mutual TLS), which validates the identity of both ends of the connection.
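As an illustration of what this looks like in practice, the following is a minimal sketch in Go of a microservice terminating mTLS itself. The certificate paths are placeholders; a real service would also need certificate rotation, fuller error handling, and matching client-side configuration.

    // Minimal sketch of a microservice terminating mTLS itself (Go).
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"log"
    	"net/http"
    	"os"
    )

    func main() {
    	// Trust only our internal CA when verifying client certificates.
    	caPEM, err := os.ReadFile("/etc/certs/ca.pem") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	caPool := x509.NewCertPool()
    	caPool.AppendCertsFromPEM(caPEM)

    	server := &http.Server{
    		Addr: ":8443",
    		TLSConfig: &tls.Config{
    			ClientCAs: caPool,
    			// Require and verify a client certificate: this is
    			// the "mutual" part of mutual TLS.
    			ClientAuth: tls.RequireAndVerifyClientCert,
    			MinVersion: tls.VersionTLS12,
    		},
    		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    			w.Write([]byte("hello over mTLS\n"))
    		}),
    	}
    	// Server certificate and key, e.g. issued by the same internal CA.
    	log.Fatal(server.ListenAndServeTLS("/etc/certs/server.pem", "/etc/certs/server-key.pem"))
    }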

However, this approach has a number of drawbacks:

  • In many organizations, different microservices are built using different programming languages, with each microservice development team using the language best suited to that particular microservice and the team’s expertise. For example, a front-end web UI microservice might be written in Node.js, while a middle-layer microservice might be written in Python or Go. As each programming language has its own set of encryption libraries, the implementation effort increases, potentially with each microservice team having to implement encryption for its own microservice rather than being able to leverage a single shared implementation across all microservices.

  • The lack of a single shared implementation extends to configuration as well: in particular, how each microservice reads the credentials it needs for encryption.

  • In addition to the effort involved in developing and maintaining all this code, the more implementations you have, the more likely it is that one of them will contain bugs that lead to security flaws.

  • It is not uncommon for older versions of encryption libraries to have known vulnerabilities that are fixed in newer versions. By the time a new version is released to address a newly discovered vulnerability, the vulnerability is public knowledge, which increases the number of attacks targeting it. To mitigate this, it is essential to update any microservices that use the library as soon as possible. If you are running many microservices, this may represent a significant development and test effort, since the code for each microservice needs to be updated and tested individually. On top of that, if you do not have a lot of automation built into your CI/CD process, there is also the operational headache of rolling the updated version of each microservice out to the live cluster.

  • Many microservices are based on third-party open source code (either in part or for the whole of the microservice). Often this means you are limited to the specific encryption options supported by the third-party code, and in many cases the specific configuration mechanisms the third-party code supports. You also become dependent on the upstream maintainers of the third-party code to keep the open source project up to date and address vulnerabilities as they are discovered.

  • Finally, there is often significant operational overhead in provisioning encryption settings and credentials across disparate implementations, each with its own configuration paradigm.

The bottom line is that while it is possible to build encryption into each of your microservices, the effort involved and the risk of unknowingly introducing security flaws (due to code or design issues, or outdated encryption libraries) make this approach unattractive for most teams.

Side-car or service mesh encryption

An alternative architectural approach to encrypting traffic between microservices at the application layer is to use the side-car design pattern. The side-car is a container that can be included in every Kubernetes pod alongside the main container(s) that implement the microservice. The side-car intercepts connections being made to/from the microservice and performs the encryption on behalf of the microservice, without any code changes in the microservice itself. The side-car can either be explicitly included in the pod specification, or it can be injected into the pod specification using an admission controller at creation time.
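As a sketch, explicitly including a side-car looks something like the following pod specification. The proxy image, names, and ports here are entirely hypothetical; real side-cars (such as Envoy, discussed below) also require configuration for traffic interception and certificates.

    apiVersion: v1
    kind: Pod
    metadata:
      name: orders
    spec:
      containers:
      - name: orders                      # the microservice itself, unchanged
        image: example.com/orders:1.4.2   # placeholder application image
        ports:
        - containerPort: 8080
      - name: tls-proxy                   # side-car that handles encryption
        image: example.com/tls-proxy:2.0  # placeholder proxy image
        ports:
        - containerPort: 15001            # intercepts pod traffic and applies mTLS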

Compared to building encryption into each microservice, the side-car approach has the advantage that a single implementation of encryption can be used across all microservices, independent of the programming language the microservice might have been written in. It means there is a single implementation to keep up to date, which in turn makes it easier to roll out vulnerability fixes or security improvements across all microservices with minimal effort.

You could in theory develop such a side-car yourself. But unless you have a niche requirement, it is usually better to use one of the many existing free, open source implementations, which have had a significant amount of security review and in-field hardening.

One popular example is the Envoy proxy, originally developed by the team at Lyft, which is often used to encrypt microservice traffic using mTLS (mutual TLS). Mutual TLS means that both the source and destination microservices provide credentials as part of setting up the connection, so each microservice can be sure it is talking to the other intended microservice. Envoy has a rich configuration model, but does not itself provide a control or management plane, so you would need to write your own automation processes to configure Envoy to work in the way you want it to.
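To give a flavor of Envoy’s configuration model, here is a fragment of a static listener configuration that requires mTLS from connecting clients. The certificate paths and port are placeholders, and a complete configuration would also define network filters, routes, and upstream clusters (omitted here for brevity).

    static_resources:
      listeners:
      - name: inbound_mtls
        address:
          socket_address: { address: 0.0.0.0, port_value: 15006 }
        filter_chains:
        - transport_socket:
            name: envoy.transport_sockets.tls
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
              require_client_certificate: true   # the "mutual" in mutual TLS
              common_tls_context:
                tls_certificates:
                - certificate_chain: { filename: /etc/envoy/certs/server.pem }
                  private_key: { filename: /etc/envoy/certs/server-key.pem }
                validation_context:
                  trusted_ca: { filename: /etc/envoy/certs/ca.pem }
          # network filters (e.g., the HTTP connection manager) omitted for brevity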

Rather than writing this automation yourself, an alternative approach is to use one of the many service mesh solutions that follow a side-car model. For example, the Istio service mesh provides a packaged solution using Envoy as the side-car, integrated with the Istio control and management plane. Service meshes provide many features beyond encryption, including service routing and visibility. While service meshes are becoming increasingly popular, a widely acknowledged downside of their richer feature set is the operational complexity it can introduce: there are many more moving parts, which can make the mesh harder to understand at a nuts-and-bolts level. Another downside is the security and management overhead inherent in the side-car design pattern: because the side-car is part of every application pod, updating side-cars (for example, in response to a CVE) is not a trivial operation, as it impacts all applications.
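Despite those caveats, the configuration surface for encryption itself is small. For example, with Istio, enabling encryption across the mesh can be as simple as labeling namespaces for side-car injection (kubectl label namespace my-namespace istio-injection=enabled) and applying a mesh-wide PeerAuthentication policy such as the following, which rejects any unencrypted pod-to-pod traffic.

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system   # applying in the Istio root namespace makes this mesh-wide
    spec:
      mtls:
        mode: STRICT             # side-cars reject any plain-text traffic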

Network layer encryption

Implementing encryption within the microservice or using a side-car model is often referred to as application layer encryption. Essentially the application (microservice or side-car) handles all of the encryption, and the network is just responsible for sending and receiving packets, without being aware the encryption is happening at all.

An alternative to application layer encryption is to implement encryption within the network layer. From the application’s perspective, it is sending unencrypted data, and it is the network layer that takes responsibility for encrypting the packets before they are transmitted across the network.

One of the main standards for network layer encryption, widely used throughout the industry for many years, is IPsec. Most IPsec implementations support a broad range of encryption algorithms, such as AES with varying key lengths. IPsec is often paired with IKE (Internet Key Exchange) as a mechanism for managing and communicating the host credentials (certificates and keys) that IPsec needs to work. There are a number of open source projects, such as the popular strongSwan solution, that provide IKE implementations and make creating and managing IPsec networks easier.
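As a sketch of what an IKE-managed IPsec setup involves, here is a fragment of a classic strongSwan ipsec.conf defining a single host-to-host tunnel; the addresses, identities, and cipher proposals are placeholders to adapt to your environment.

    # Fragment of a strongSwan ipsec.conf for a host-to-host tunnel.
    conn node-a-to-node-b
        keyexchange=ikev2          # use IKEv2 to negotiate keys
        ike=aes256-sha256-modp2048 # IKE proposal: AES-256 with SHA-256
        esp=aes256gcm16            # ESP proposal: AES-256-GCM for the data path
        left=10.0.0.1              # this node (placeholder address)
        leftcert=nodeA-cert.pem    # this node's certificate
        right=10.0.0.2             # peer node (placeholder address)
        rightid="CN=node-b"        # expected peer identity
        auto=start                 # bring the tunnel up automatically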

Some enterprises choose solutions such as strongSwan to manage IPsec, and then run Kubernetes on top. In this case Kubernetes is not really aware of IPsec. Even with projects such as strongSwan helping to make IPsec easier to set up and manage, many regard IPsec as quite heavyweight and tricky to manage from an overall operational perspective.

One alternative to IPsec is WireGuard. WireGuard is a newer encryption implementation designed to be extremely simple yet fast, using state-of-the-art cryptography. Architecturally it is simpler, and initial testing indicates that it outperforms IPsec in various circumstances. Note, though, that development continues on both WireGuard and IPsec, and in particular, as advances are made to cryptographic algorithms, the comparative performance of the two will likely evolve.
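WireGuard’s simplicity is visible in its configuration. A minimal interface definition for one node might look like the following (keys and addresses are placeholders); each peer entry simply maps a public key to the IP ranges whose traffic should be encrypted to that peer.

    # Minimal WireGuard configuration (e.g. /etc/wireguard/wg0.conf) for one node.
    [Interface]
    PrivateKey = <this-node-private-key>
    Address = 192.168.100.1/24       # this node's address on the encrypted network
    ListenPort = 51820

    [Peer]
    PublicKey = <peer-public-key>
    Endpoint = 10.0.0.2:51820        # the peer's real network address
    AllowedIPs = 192.168.100.2/32    # traffic for this range is encrypted to the peer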

Rather than setting up and managing IPsec or WireGuard yourself, an operationally easier approach for most organizations is to use a Kubernetes network plugin with built-in support for encryption. There are a variety of Kubernetes network plugins that support different types of encryption, with varying performance characteristics.
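For example, Calico can enable WireGuard-based encryption of pod-to-pod traffic with a single change to its cluster-wide Felix configuration; the command below is a sketch of that approach, and other network plugins have their own equivalent switches.

    # Sketch: enable WireGuard encryption in Calico by patching its
    # cluster-wide Felix configuration resource.
    kubectl patch felixconfiguration default --type merge \
      --patch '{"spec": {"wireguardEnabled": true}}'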

If you are running network intensive workloads then it is important to consider the performance cost of encryption. This cost applies whether you are encrypting at the application layer or at the network layer, but the choice of encryption technology can make a significant difference to performance. For example, the following chart shows independent benchmark results for four popular Kubernetes network plugins (the most recent benchmarks available at the time of writing, published in 2020).

Figure 8-1. Independent benchmark results, published in 2020, for four popular Kubernetes network plugins

Using a Kubernetes network plugin that supports encryption is typically significantly simpler from an operational standpoint, with many fewer moving parts than adopting a service mesh, and significantly less effort than building encryption into your application or microservice code. If your primary motivation for adopting a service mesh is security through encryption, then using a Kubernetes network plugin that supports network layer encryption, along with Kubernetes network policies, is likely to be significantly easier to manage and maintain. Note that we cover other aspects of service meshes, such as observability, in the chapter on observability.

Conclusion

In this chapter we presented the case for encrypting data in transit and the main approaches to implementing it in a Kubernetes cluster. We hope this enables you to pick the option best suited to your use case.

  • As you move mission-critical workloads to production, you will need to implement encryption of data in transit for certain types of data. We recommend encrypting data in transit even when compliance requirements do not strictly demand it.

  • We covered the well-known methods for implementing encryption: application layer encryption in your own code, side-car-based encryption using a service mesh, and network layer encryption.

  • For its operational simplicity and better performance, we recommend network layer encryption.
