Trying out CRI-O

Let's look at some installation methods so you can give CRI-O a try on your own. To get started, you'll need a few things, including runc or another OCI-compatible runtime, as well as socat, iproute, and iptables. There are a few options for running CRI-O in Kubernetes:

  • In a full-scale cluster, using kubeadm and systemd to leverage the CRI-O socket with --container-runtime-endpoint /var/run/crio/crio.sock
  • With Minikube, by starting it up with specific command-line options
  • On Atomic Host, with atomic install --system-package=no -n cri-o --storage ostree registry.centos.org/projectatomic/cri-o:latest

If you'd like to build CRI-O from source, you can run the following on your laptop. You'll need some dependencies installed for the build to work, so first run the commands below for your distribution.

The following commands are for Fedora, CentOS, and RHEL distributions:

yum install -y \
  btrfs-progs-devel \
  device-mapper-devel \
  git \
  glib2-devel \
  glibc-devel \
  glibc-static \
  go \
  golang-github-cpuguy83-go-md2man \
  gpgme-devel \
  libassuan-devel \
  libgpg-error-devel \
  libseccomp-devel \
  libselinux-devel \
  ostree-devel \
  pkgconfig \
  runc \
  skopeo-containers

These commands are to be used for Debian, Ubuntu, and related distributions:

apt-get install -y \
  btrfs-tools \
  git \
  golang-go \
  libassuan-dev \
  libdevmapper-dev \
  libglib2.0-dev \
  libc6-dev \
  libgpgme11-dev \
  libgpg-error-dev \
  libseccomp-dev \
  libselinux1-dev \
  pkg-config \
  go-md2man \
  runc \
  skopeo-containers

Next, you'll need to grab the source code like so:

git clone https://github.com/kubernetes-incubator/cri-o # or your fork
cd cri-o

Once you have the code, go ahead and build it:

make install.tools
make
sudo make install

You can use additional build flags to add things such as seccomp, SELinux, and AppArmor support with this format: make BUILDTAGS='seccomp apparmor'.

You can run Kubernetes locally with the local-up-cluster.sh script in Kubernetes. I'll also show you how to run this on Minikube.

First, clone the Kubernetes repository:

git clone https://github.com/kubernetes/kubernetes.git

Next, you'll need to start the CRI-O daemon and run the following command to spin up your cluster using CRI-O:

CGROUP_DRIVER=systemd \
CONTAINER_RUNTIME=remote \
CONTAINER_RUNTIME_ENDPOINT='unix:///var/run/crio/crio.sock --runtime-request-timeout=15m' \
./hack/local-up-cluster.sh

If you have a running cluster, you can also use the instructions at https://github.com/kubernetes-incubator/cri-o/blob/master/kubernetes.md to switch the runtime from Docker to CRI-O.
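
On an existing kubeadm-based node, that switch boils down to pointing the kubelet at the CRI-O socket instead of Docker. A minimal sketch of a systemd drop-in that does this (the file path and variable name here are illustrative assumptions, not taken from the linked instructions, so check them against your distribution):

```
# /etc/systemd/system/kubelet.service.d/0-cri-o.conf (illustrative path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock"
```

After writing the drop-in, run sudo systemctl daemon-reload followed by sudo systemctl restart kubelet to pick it up.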

Let's also check how to use CRI-O on Minikube, which is one of the easiest ways to get experimenting:

minikube start \
  --network-plugin=cni \
  --extra-config=kubelet.container-runtime=remote \
  --extra-config=kubelet.container-runtime-endpoint=/var/run/crio/crio.sock \
  --extra-config=kubelet.image-service-endpoint=/var/run/crio/crio.sock \
  --bootstrapper=kubeadm

Lastly, we can use our GCP platform to spin up a cluster with CRI-O and start experimenting:

gcloud compute instances create cri-o \
  --machine-type n1-standard-2 \
  --image-family ubuntu-1610 \
  --image-project ubuntu-os-cloud

Let's use these machines to run through a quick tutorial. SSH into the machine using gcloud compute ssh cri-o.

Once you're on the server, we'll need to install the cri-o, crictl, cni, and runc programs. Grab the runc binary first:

wget https://github.com/opencontainers/runc/releases/download/v1.0.0-rc4/runc.amd64

Make it executable and move it into your path as follows:

chmod +x runc.amd64
sudo mv runc.amd64 /usr/bin/runc

You can see it's working by checking the version:

$ runc -version
runc version 1.0.0-rc4
commit: 2e7cfe036e2c6dc51ccca6eb7fa3ee6b63976dcd
spec: 1.0.0

You'll need to build the CRI-O binary from source, as the project doesn't currently ship binaries.

First, download a Go binary release and install it:

wget https://storage.googleapis.com/golang/go1.8.5.linux-amd64.tar.gz
sudo tar -xvf go1.8.5.linux-amd64.tar.gz -C /usr/local/
mkdir -p $HOME/go/src
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

This should feel familiar, as you would install Go the same way for any other project. Check your version:

go version
go version go1.8.5 linux/amd64

Next up, get crictl using the following commands:

go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
cd $GOPATH/src/github.com/kubernetes-incubator/cri-tools
make
make install

After that's downloaded, you'll need a few packages to build CRI-O from source:

sudo apt-get update && sudo apt-get install -y \
  libglib2.0-dev \
  libseccomp-dev \
  libgpgme11-dev \
  libdevmapper-dev \
  make \
  git

Now, get CRI-O and install it:

go get -d github.com/kubernetes-incubator/cri-o
cd $GOPATH/src/github.com/kubernetes-incubator/cri-o
make install.tools
make
sudo make install

After this is complete, you'll need to create the configuration files with sudo make install.config. Ensure that you're using a valid registry option in the /etc/crio/crio.conf file. An example of this looks like the following:

registries = ['registry.access.redhat.com', 'registry.fedoraproject.org', 'docker.io']
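
For context, crio.conf is a TOML file and this setting lives in the image section; the surrounding fragment looks roughly like this (the section name is assumed from the CRI-O configuration layout of this era, so verify it against your generated file):

```
[crio.image]
registries = [
  'docker.io'
]
```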

At this point, we're ready to start the CRI-O system daemon, which we can do by leveraging systemctl. Let's create a crio.service:

$ vim /etc/systemd/system/crio.service

Add the following text:

[Unit]
Description=OCI-based implementation of Kubernetes Container Runtime Interface
Documentation=https://github.com/kubernetes-incubator/cri-o

[Service]
ExecStart=/usr/local/bin/crio
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Once that's complete, we can reload systemctl and enable CRI-O:

$ sudo systemctl daemon-reload && \
  sudo systemctl enable crio && \
  sudo systemctl start crio

After this is complete, we can validate that we have a working install of CRI-O by querying the version of the endpoint as follows:

$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.10.0-dev
RuntimeApiVersion: v1alpha1

Next up, we'll need to grab the latest version of the CNI plugin, so we can build and use it from source. Let's use Go to grab our source code:

go get -d github.com/containernetworking/plugins
cd $GOPATH/src/github.com/containernetworking/plugins
./build.sh

Next, install the CNI plugins into your cluster:

sudo mkdir -p /opt/cni/bin
sudo cp bin/* /opt/cni/bin/

Now we can configure CNI so that CRI-O can use it. First, make a directory to store the configuration; then we'll create two configuration files as follows:

sudo mkdir -p /etc/cni/net.d

Next, you'll want to create and compose 10-mynet.conf:

sudo sh -c 'cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF'

And then, compose the loopback interface as follows:

sudo sh -c 'cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.2.0",
    "type": "loopback"
}
EOF'

Next up, we'll need some tooling from Project Atomic to get this working. skopeo is an OCI-compliant command-line utility that can perform various operations on container images and image repositories. Install the skopeo-containers package as follows:

sudo add-apt-repository ppa:projectatomic/ppa
sudo apt-get update
sudo apt-get install skopeo-containers -y

Restart CRI-O to pick up the CNI configuration with sudo systemctl restart crio. Great! Now that we have these components installed, let's build something!

First off, we'll create a sandbox using a template policy from the Kubernetes incubator.

This template is NOT production ready!
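
The template earns that warning because it turns image signature verification off entirely; in the containers/image policy format, its effect amounts to the following (a sketch of what the test policy does, not a verbatim copy of the file):

```
{
    "default": [
        { "type": "insecureAcceptAnything" }
    ]
}
```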

First, change to the CRI-O source tree that contains the template, as follows:

cd $GOPATH/src/github.com/kubernetes-incubator/cri-o

Next, copy the policy template into place, then create the pod sandbox and capture its ID:

sudo mkdir /etc/containers/
sudo cp test/policy.json /etc/containers
POD_ID=$(sudo crictl runp test/testdata/sandbox_config.json)

You can use crictl to get the status of the pod as follows:

sudo crictl inspectp --output table $POD_ID
ID: cd6c0883663c6f4f99697aaa15af8219e351e03696bd866bc3ac055ef289702a
Name: podsandbox1
UID: redhat-test-crio
Namespace: redhat.test.crio
Attempt: 1
Status: SANDBOX_READY
Created: 2016-12-14 15:59:04.373680832 +0000 UTC
Network namespace: /var/run/netns/cni-bc37b858-fb4d-41e6-58b0-9905d0ba23f8
IP Address: 10.88.0.2
Labels:
group -> test
Annotations:
owner -> jwhite
security.alpha.kubernetes.io/seccomp/pod -> unconfined
security.alpha.kubernetes.io/sysctls ->
kernel.shm_rmid_forced=1,net.ipv4.ip_local_port_range=1024 65000
security.alpha.kubernetes.io/unsafe-sysctls -> kernel.msgmax=8192

We'll use the crictl tool again to pull a container image for a Redis server:

sudo crictl pull quay.io/crio/redis:alpine
CONTAINER_ID=$(sudo crictl create $POD_ID test/testdata/container_redis.json test/testdata/sandbox_config.json)

Next, we'll start and check the status of the Redis container as follows:

sudo crictl start $CONTAINER_ID
sudo crictl inspect $CONTAINER_ID

At this point, you should be able to telnet into the Redis container to test its functionality:

telnet 10.88.0.2 6379
Trying 10.88.0.2…
Connected to 10.88.0.2.
Escape character is '^]'.
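
Once connected, you can type a Redis command straight into the telnet session to confirm the server responds; with a healthy Redis instance, the exchange looks like this:

```
PING
+PONG
```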

Nicely done! You've now created a pod and container manually, using some of the core abstractions of the Kubernetes system. You can stop the container, shut down the pod, and verify the cleanup with the following commands:

sudo crictl stop $CONTAINER_ID
sudo crictl rm $CONTAINER_ID
sudo crictl stopp $POD_ID
sudo crictl rmp $POD_ID
sudo crictl pods
sudo crictl ps