Now that you’ve seen how to install, interact with, and use Kubernetes to deploy and manage applications, we focus in this chapter on adapting Kubernetes to your needs as well as fixing bugs in Kubernetes. For this, you will need Go installed and access to the Kubernetes source code hosted on GitHub. We show how to compile Kubernetes as a whole, and also how to compile specific components such as the client, kubectl. We also demonstrate how to use Python to talk to the Kubernetes API server and show how to extend Kubernetes with a custom resource definition.
You want to package your own Kubernetes binaries from source instead of downloading the official release binaries (see Recipe 2.4) or third-party artifacts.
Clone the Kubernetes Git repository and build from source.
If you are on a Docker host, you can use the quick-release target of the root Makefile, as shown here:
$ git clone https://github.com/kubernetes/kubernetes
$ cd kubernetes
$ make quick-release
This Docker-based build requires at least 4 GB of RAM to complete. Ensure that your Docker daemon has access to that much memory. On macOS, access the Docker for Mac preferences and increase the allocated RAM.
The binaries will be located in the _output/release-stage directory and a complete bundle will be in the _output/release-tars directory.
Or, if you have a Go environment properly set up, build directly using the root Makefile:
$ git clone https://github.com/kubernetes/kubernetes
$ cd kubernetes
$ make
The binaries will be located in the _output/bin directory.
Detailed Kubernetes developer guides
You want to build a specific component of Kubernetes from source, not all the components; for example, you only want to build the client, kubectl.
Instead of using make quick-release or simply make, as shown in Recipe 13.1, do make kubectl.
There are targets in the root Makefile to build individual components. For example, to compile kubectl, kubeadm, and hyperkube, do this:
$ make kubectl
$ make kubeadm
$ make hyperkube
The binaries will be located in the _output/bin directory.
Install the Python kubernetes module. This module is currently being developed in the Kubernetes incubator. You can install the module from source or from the Python Package Index (PyPI):
$ pip install kubernetes
With a Kubernetes cluster reachable using your default kubectl context, you are now ready to use this Python module to talk to the Kubernetes API. For example, the following Python script lists all the pods and prints their names:
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
res = v1.list_pod_for_all_namespaces(watch=False)
for pod in res.items:
    print(pod.metadata.name)
The config.load_kube_config() call in this script will load your Kubernetes credentials and endpoint from your kubectl config file. By default, it loads the cluster endpoint and credentials for your current context.
The Python client is built using the OpenAPI specification of the Kubernetes API. It is up to date and autogenerated. All APIs are available through this client.
Each API group corresponds to a specific class, so to call a method on an API object that is part of the /api/v1 API group, you need to instantiate the CoreV1Api class. To use deployments, you will need to instantiate the ExtensionsV1beta1Api class. All methods and corresponding API group instances can be found in the autogenerated README.
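As a sketch of how this naming convention plays out, the snippet below maps a few API group paths to their client class names and shows how you might list deployments. The BatchV1Api entry and the list_deployments helper are illustrative assumptions following the autogenerated client's conventions; the deployment call requires the kubernetes package and a reachable cluster.

```python
# Mapping of API group paths to the corresponding Python client class names,
# following the autogenerated client's naming convention.
API_GROUP_CLASSES = {
    "/api/v1": "CoreV1Api",
    "/apis/extensions/v1beta1": "ExtensionsV1beta1Api",
    "/apis/batch/v1": "BatchV1Api",
}

def api_class_name(group_path):
    """Return the client class name for a given API group path."""
    return API_GROUP_CLASSES[group_path]

def list_deployments(namespace="default"):
    """List deployment names in a namespace (requires a reachable cluster)."""
    from kubernetes import client, config  # deferred: needs the package installed
    config.load_kube_config()
    api = client.ExtensionsV1beta1Api()
    return [d.metadata.name
            for d in api.list_namespaced_deployment(namespace).items]

if __name__ == "__main__":
    print(api_class_name("/api/v1"))
```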
Use a CustomResourceDefinition (CRD), as described in the Kubernetes documentation.
Let’s say you want to define a custom resource of kind Function. This represents a short-running, Job-like resource, akin to what AWS Lambda offers; that is, Function-as-a-Service (FaaS, sometimes misleadingly called “serverless”).
For a production-ready FaaS solution running on Kubernetes, see Recipe 14.7.
First, define the CRD in a manifest file called functions-crd.yaml:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: functions.example.com
spec:
  group: example.com
  version: v1
  names:
    kind: Function
    plural: functions
  scope: Namespaced
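Note that the CRD’s metadata.name must be the plural name joined with the group (here, "functions" plus "example.com"). As a small sanity check, assuming a manifest already parsed into a Python dict, you could verify this rule like so (the helper name is our own):

```python
def crd_name_ok(manifest):
    """Check that metadata.name equals <plural>.<group>, as Kubernetes requires."""
    spec = manifest["spec"]
    expected = "{}.{}".format(spec["names"]["plural"], spec["group"])
    return manifest["metadata"]["name"] == expected

# The Function CRD from above, as a dict (e.g., loaded with yaml.safe_load):
crd = {
    "metadata": {"name": "functions.example.com"},
    "spec": {
        "group": "example.com",
        "names": {"kind": "Function", "plural": "functions"},
    },
}

if __name__ == "__main__":
    print(crd_name_ok(crd))
```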
Then let the API server know about your new CRD (it can take several minutes to register):
$ kubectl create -f functions-crd.yaml
customresourcedefinition "functions.example.com" created
Now that you have the custom resource Function defined and the API server knows about it, you can instantiate it using a manifest called myfaas.yaml with the following content:
apiVersion: example.com/v1
kind: Function
metadata:
  name: myfaas
spec:
  code: "http://src.example.com/myfaas.js"
  ram: 100Mi
And create the myfaas resource of kind Function as per usual:
$ kubectl create -f myfaas.yaml
function "myfaas" created

$ kubectl get crd functions.example.com -o yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: 2017-08-13T10:11:50Z
  name: functions.example.com
  resourceVersion: "458065"
  selfLink: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/functions.example.com
  uid: 278016fe-81a2-11e7-b58a-080027390640
spec:
  group: example.com
  names:
    kind: Function
    listKind: FunctionList
    plural: functions
    singular: function
  scope: Namespaced
  version: v1
status:
  acceptedNames:
    kind: Function
    listKind: FunctionList
    plural: functions
    singular: function
  conditions:
  - lastTransitionTime: null
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: 2017-08-13T10:11:50Z
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established

$ kubectl describe functions.example.com/myfaas
Name:         myfaas
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  example.com/v1
Kind:         Function
Metadata:
  Cluster Name:
  Creation Timestamp:             2017-08-13T10:12:07Z
  Deletion Grace Period Seconds:  <nil>
  Deletion Timestamp:             <nil>
  Resource Version:               458086
  Self Link:                      /apis/example.com/v1/namespaces/default/functions/myfaas
  UID:                            316f3e99-81a2-11e7-b58a-080027390640
Spec:
  Code:  http://src.example.com/myfaas.js
  Ram:   100Mi
Events:  <none>
To discover CRDs, simply access the API server. For example, using kubectl proxy, you can access the API server locally and query the key space (example.com/v1 in our case):
$ curl 127.0.0.1:8001/apis/example.com/v1/ | jq .
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "example.com/v1",
  "resources": [
    {
      "name": "functions",
      "singularName": "function",
      "namespaced": true,
      "kind": "Function",
      "verbs": [
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "create",
        "update",
        "watch"
      ]
    }
  ]
}
Here you can see the resource along with the allowed verbs.
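The same discovery call can be made from Python using only the standard library, again going through a local kubectl proxy on 127.0.0.1:8001. This is a sketch under that assumption; the discovery_path helper and list_resources function are our own names, and the fetch only works while the proxy is running.

```python
import json
from urllib.request import urlopen

def discovery_path(group, version):
    """Build the discovery path for an API group/version."""
    return "/apis/{}/{}/".format(group, version)

def list_resources(group, version, host="http://127.0.0.1:8001"):
    """Fetch the APIResourceList for a group (requires a running kubectl proxy)."""
    with urlopen(host + discovery_path(group, version)) as resp:
        body = json.load(resp)
    return [r["name"] for r in body.get("resources", [])]

if __name__ == "__main__":
    print(discovery_path("example.com", "v1"))
```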
When you want to get rid of your custom resource instance myfaas, simply delete it:
$ kubectl delete functions.example.com/myfaas
function "myfaas" deleted
As you’ve seen, it is straightforward to create a CRD. From an end user’s point of view, CRDs present a consistent API and are more or less indistinguishable from native resources such as pods or jobs. All the usual commands, such as kubectl get and kubectl delete, work as expected.
Creating a CRD is, however, less than half of the work necessary to fully extend the Kubernetes API. On their own, CRDs only let you store and retrieve custom data via the API server in etcd. You also need to write a custom controller that interprets the custom data expressing the user’s intent, establishes a control loop comparing the current state with the declared state, and tries to reconcile the two.
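The core of such a controller can be sketched as a pure reconcile step plus a watch loop. The reconcile function below is a minimal illustration of the compare-and-act idea, not a production pattern; the watch loop assumes the kubernetes Python package and reuses the example.com/v1 Function CRD from above.

```python
def reconcile(declared, observed):
    """Return the actions needed to move observed state toward declared state.

    Both arguments are dicts mapping resource names to their specs.
    """
    actions = []
    for name, spec in declared.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in declared:
            actions.append(("delete", name, None))
    return actions

def run_controller():
    """Watch Function objects and react to every event (needs a cluster)."""
    from kubernetes import client, config, watch  # deferred: needs the package
    config.load_kube_config()
    api = client.CustomObjectsApi()
    for event in watch.Watch().stream(
            api.list_cluster_custom_object,
            group="example.com", version="v1", plural="functions"):
        obj = event["object"]
        print(event["type"], obj["metadata"]["name"])
        # ...here you would update your observed state and apply the
        # actions computed by reconcile()
```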
Up until v1.7, what are now known as CRDs were called third-party resources (TPRs). If you happen to have a TPR, strongly consider migrating it now.
The main limitations of CRDs (and hence the reasons you might want to use a user API server in certain cases) are:
Only a single version per CRD is supported, though it is possible to have multiple versions per API group (that means you can’t convert between different representations of your CRD).
CRDs don’t support assigning default values to fields in v1.7 or earlier.
Validation of the fields defined in a CRD specification is possible only from v1.8.
It’s not possible to define subresources, such as status resources.
Stefan Schimanski and Michael Hausenblas’s blog post “Kubernetes Deep Dive: API Server – Part 3a”
Aaron Levy, “Writing a Custom Controller: Extending the Functionality of Your Cluster”, KubeCon 2017
Tu Nguyen’s article “A Deep Dive into Kubernetes Controllers”
Yaron Haviv’s article “Extend Kubernetes 1.7 with Custom Resources”