Testing, releases, and cutovers

The rolling update feature works well for a simple blue-green deployment scenario. However, in a real-world blue-green deployment with a stack of multiple applications, there can be a variety of interdependencies that require in-depth testing. The rolling-update command lets us pass an --update-period flag to pause between pod updates so that some testing can be done, but this will not always be satisfactory for testing purposes.
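
For reference, a rolling update that leaves a longer pause between pod replacements might look like the following; the controller name and image tag here are only illustrative:

$ kubectl rolling-update node-js-scale --image=jonbaier/pod-scaling:0.3 --update-period=120s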

Similarly, you may want partial changes to persist for a longer time and all the way up to the load balancer or service level. For example, you may wish to run an A/B test on a new user interface feature with a portion of your users. Another example is running a canary release (a replica in this case) of your application on new infrastructure, such as a newly added cluster node.
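
As a rough sketch of the canary idea, a separate ReplicationController could pin a single replica of the new version to a freshly added node using a nodeSelector; the controller name and node hostname below are hypothetical:

apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-canary
spec:
  replicas: 1
  selector:
    name: node-js-canary
  template:
    metadata:
      labels:
        name: node-js-canary
    spec:
      # Schedule this canary replica only on the newly added node
      nodeSelector:
        kubernetes.io/hostname: new-node-1
      containers:
      - name: node-js-scale
        image: jonbaier/pod-scaling:0.3
        ports:
        - containerPort: 80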

Let's take a look at an A/B testing example. For this example, we will need to create a new service that uses sessionAffinity. We will set the affinity to ClientIP, which will forward a given client to the same backend pod on every request. This is the key if we want a portion of our users to see one version while others see another. The following listing, pod-AB-service.yaml, defines the service:

apiVersion: v1
kind: Service
metadata:
  name: node-js-scale-ab
  labels:
    service: node-js-scale-ab
spec:
  type: LoadBalancer
  ports:
  - port: 80
  sessionAffinity: ClientIP
  selector:
    service: node-js-scale-ab

Create this service as usual with the create command, as follows:

$ kubectl create -f pod-AB-service.yaml
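
To confirm the service came up and, once the cloud provider provisions it, has an external address, we can inspect it as follows:

$ kubectl get service node-js-scale-ab
$ kubectl describe service node-js-scale-ab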

This will create a service that will point to our pods running both version 0.2 and 0.3 of the application. Next, we will create two ReplicationControllers, each of which creates two replicas of the application. One set will have version 0.2 of the application, and the other will have version 0.3, as shown in the listings pod-A-controller.yaml and pod-B-controller.yaml:

# pod-A-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-scale-a
  labels:
    name: node-js-scale-a
    version: "0.2"
    service: node-js-scale-ab
spec:
  replicas: 2
  selector:
    name: node-js-scale-a
    version: "0.2"
    service: node-js-scale-ab
  template:
    metadata:
      labels:
        name: node-js-scale-a
        version: "0.2"
        service: node-js-scale-ab
    spec:
      containers:
      - name: node-js-scale
        image: jonbaier/pod-scaling:0.2
        ports:
        - containerPort: 80
        livenessProbe:
          # An HTTP health check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          # An HTTP health check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1

# pod-B-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js-scale-b
  labels:
    name: node-js-scale-b
    version: "0.3"
    service: node-js-scale-ab
spec:
  replicas: 2
  selector:
    name: node-js-scale-b
    version: "0.3"
    service: node-js-scale-ab
  template:
    metadata:
      labels:
        name: node-js-scale-b
        version: "0.3"
        service: node-js-scale-ab
    spec:
      containers:
      - name: node-js-scale
        image: jonbaier/pod-scaling:0.3
        ports:
        - containerPort: 80
        livenessProbe:
          # An HTTP health check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          # An HTTP health check
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1

Note that we have the same service label, so these replicas will also be added to the service pool based on this selector. We also have livenessProbe and readinessProbe defined to make sure that our new version is working as expected. Again, use the create command to spin up the controllers:

$ kubectl create -f pod-A-controller.yaml
$ kubectl create -f pod-B-controller.yaml
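
Before moving on, it is worth confirming that all four pods are running and carry the shared service label; the -l flag filters by label:

$ kubectl get pods -l service=node-js-scale-ab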

Now, we have a service balancing both versions of our app. In a true A/B test, we would now want to start collecting metrics on the visits to each version. Because sessionAffinity is set to ClientIP, each client's requests will keep landing on the same pod, so some users will see v0.2 and some will see v0.3.

Because we have sessionAffinity turned on, your test will likely show the same version every time. This is expected, and you would need to attempt a connection from multiple IP addresses to see both user experiences with each version.
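
For example, assuming the demo application reports its version in the page it serves, hitting the service's external IP (the placeholder below) from two different client machines should pin each client to a single version:

$ kubectl get service node-js-scale-ab
$ curl http://<EXTERNAL-IP>/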

Since each version runs in its own pods, we can easily separate the logs and even add a logging container to the pod definitions for a sidecar logging pattern. For brevity, we will not cover that setup in this book, but we will look at some of the logging tools in Chapter 8, Monitoring and Logging.

We can start to see how this process will be useful for a canary release or a manual blue-green deployment. We can also see how easy it is to launch a new version and slowly transition over to the new release.

Let's look at a basic transition quickly. It's really as simple as a few scale commands, which are as follows:

$ kubectl scale --replicas=3 rc/node-js-scale-b
$ kubectl scale --replicas=1 rc/node-js-scale-a
$ kubectl scale --replicas=4 rc/node-js-scale-b
$ kubectl scale --replicas=0 rc/node-js-scale-a

Use the get pods command combined with the -l filter in between the scale commands to watch the transition as it happens.
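
For instance, the following command lists only the pods behind our A/B service and adds a column for the version label, making the shift from 0.2 to 0.3 easy to watch:

$ kubectl get pods -l service=node-js-scale-ab -L version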

Now, we have fully transitioned over to version 0.3 (node-js-scale-b). All users will now see version 0.3 of the site. We have four replicas of version 0.3 and none of 0.2. If you run a get rc command, you will notice that we still have a ReplicationController for 0.2 (node-js-scale-a). As a final cleanup, we can remove that controller completely, as follows:

$ kubectl delete rc/node-js-scale-a
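
If you are finished with this example entirely, the remaining controller and the A/B service can be removed in the same way:

$ kubectl delete rc/node-js-scale-b
$ kubectl delete svc/node-js-scale-ab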