Security
This chapter describes some of the recommended practices for implementing IBM Cloud Private security.
6.1 How IBM Cloud Private handles authentication
There are two types of entities (clients) that can authenticate with IBM Cloud Private. These are:
Actual humans (users)
Pods (service account)
Users are meant to be managed by an external system, such as an LDAP (Lightweight Directory Access Protocol) directory, while pods use a mechanism called the service account, which is created and stored in the cluster as a service account resource.
Pods can authenticate by sending the contents of the file /var/run/secrets/kubernetes.io/serviceaccount/token, which is mounted into each container’s file system through a secret volume. Every pod is associated with a service account, which represents the identity of the app running in the pod. The token file holds the service account’s authentication token. Service accounts are resources just like pods, secrets, configmaps, and so on, and are scoped to the individual namespaces. A default service account is automatically created for each namespace.
You can assign a service account to a pod by specifying the account’s name in the pod manifest. If you don’t assign it explicitly, the pod will use the default service account in the namespace.
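As a hedged illustration, the following minimal pod manifest assigns a hypothetical service account named app-reader in the development namespace; the pod name, namespace, and image are placeholders, not part of IBM Cloud Private:
apiVersion: v1
kind: Pod
metadata:
  name: sample-app                  # hypothetical pod name
  namespace: development
spec:
  serviceAccountName: app-reader    # service account whose token is mounted into the pod
  containers:
  - name: app
    image: mycluster.icp:8500/development/sample-app:1.0   # placeholder image
Inside the running container, the token for this service account is available at /var/run/secrets/kubernetes.io/serviceaccount/token, as described above.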
IBM Cloud Private supports the following two authentication protocols for users:
OIDC (OpenID Connect) based authentication
SAML (Security Assertion Markup Language) based federated authentication
6.1.1 OIDC-based authentication
IBM Cloud Private provides OIDC-based authentication through the WebSphere Liberty server. The Liberty-based OIDC provider supports both local and LDAP directory-based authentication.
Lightweight Directory Access Protocol (LDAP) support
IBM Cloud Private can be configured with a single or multiple LDAP servers for authentication and authorization. IBM Cloud Private supports the following LDAP types:
IBM Tivoli® Directory Server
IBM Lotus® Domino®
IBM SecureWay Directory Server
Novell eDirectory
Sun Java System Directory Server
Netscape Directory Server
Microsoft Active Directory
With IBM Cloud Private, you can authenticate across multiple LDAPs. You can add multiple directory entries to the LDAP configuration in the server.xml file. Liberty automatically resolves the domain name from the login and authenticates against the targeted LDAP directory. IBM Cloud Private users and user groups are associated with an enterprise directory at the time of user and user group onboarding (import). When a new LDAP directory entry is created, its domain name is also added as a new entry. At login time, you can specify the domain against which the authentication should be validated.
It is possible to have a mix of directory types, for example Active Directory, IBM Tivoli Directory Server, and OpenLDAP. Role-based access control (RBAC) is enforced on the LDAP domain. Cluster administrators have access to all LDAP domains, whereas team administrators are restricted to the domains that they are authorized for.
For more information on configuring an LDAP connection with IBM Cloud Private, see the following document:
6.1.2 SAML-based authentication
IBM Cloud Private can be configured to use SAML (Security Assertion Markup Language) based authentication from an enterprise SAML server. The steps to configure SAML-based federated authentication with IBM Cloud Private are discussed in the following document:
For configuring single sign-on see the following link:
6.2 How authorization is handled in IBM Cloud Private
IBM Cloud Private supports role-based access control (RBAC) as the authorization mechanism. RBAC is enforced in IBM Cloud Private through teams. A team is an entity that groups users and resources. The resources can be Kubernetes resources, such as a namespace, pod, or broker, or non-Kubernetes resources, such as a Helm chart, database instance, or cloud connection. Resources are assigned to a team through the resource CRNs. The responsible services have to expose the resource CRNs through an API so that they become available in the team's Add Resource dialog.
The Kubernetes resources, such as namespaces, are exposed through the https://icp-ip:8443/idmgmt/identity/api/v1/k8resources/getK8Resources?resourceType=namespace API.
It is possible to fetch the resources that are attached to a specific user through their teams by using the https://icp-ip:8443/idmgmt/identity/api/v1/users/{id}/getTeamResources API.
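As a sketch only, the two APIs above can be called with a standard HTTPS client. The calls below assume that you have already obtained a valid authentication token (for example through the IBM Cloud Private CLI) and exported it as ACCESS_TOKEN; the user ID is a placeholder:
# List the namespace resources exposed by the identity management service
curl -k -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://icp-ip:8443/idmgmt/identity/api/v1/k8resources/getK8Resources?resourceType=namespace"

# List the resources attached to a specific user through that user's teams
curl -k -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://icp-ip:8443/idmgmt/identity/api/v1/users/<user-id>/getTeamResources"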
6.2.1 Cloud resource names (CRN) specification
IBM Cloud Private follows the CRN convention:
crn:version:cname:ctype:service-name:region:scope:service-instance:resource-type:resource-instance.
Let us take an example. We will create a team in IBM Cloud Private, onboard an LDAP user to the team, and assign a role to the user.
1. Create a team in the IBM Cloud Private UI and name it icp-team, as shown in Figure 6-1.
Figure 6-1 Create team icp-team
2. Add user carlos to the team icp-team and assign the Administrator role for this team. See Figure 6-2.
Figure 6-2 Add user carlos to team icp-team
3. Assign namespace development to icp-team as shown in Figure 6-3.
Figure 6-3 Assign namespace development to icp-team
This concludes the scenario. For details on which role has permission to perform actions on which resources, see the URL below:
Within a team, each user or user group can have only one role. However, a user might have multiple roles within a team when you add a user both individually and also as a member of a team’s group. In that case, the user can act based on the highest role that is assigned to the user. For example, if you add a user as an Administrator and also assign the Viewer role to the user’s group, the user can act as an Administrator for the team.
6.2.2 Role-based access control (RBAC) for pods
Every pod has an associated service account. Always associate the service account with a fine-grained RBAC policy, and grant the service account only the actions and resources that the pod requires to function properly.
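The following sketch shows one way to apply this practice, assuming a hypothetical service account named app-reader in the development namespace that only needs to read pods; all names here are illustrative, not part of IBM Cloud Private:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader               # hypothetical service account used by the pod
  namespace: development
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader               # grants read-only access to pods in this namespace
  namespace: development
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader-pod-reader
  namespace: development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: app-reader
  namespace: development
A pod that references app-reader in its serviceAccountName field can then list pods in the development namespace, but nothing more.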
6.3 Isolation on IBM Cloud Private
IBM Cloud Private offers multi-tenancy support through user, compute, and network isolation within a cluster. Dedicated physical and logical resources are required for the cluster to achieve workload isolation. Multi-tenancy requires applying various isolation techniques that are described in this topic. User, compute, and network isolation are enforced by confining workload deployments to virtual and physical resources. The enforced isolation also allows the cluster administrator to control the footprint that is allocated to various teams based on their requirements.
The following are some of the key prerequisites to achieve isolation of deployments on cluster nodes.
IBM Cloud Private provides several levels of multi-tenancy. The cluster administrator must analyze workload requirements to determine which levels are required. The following isolation features can be used to satisfy these requirements:
Host groups: As part of the preinstallation configuration, the cluster administrator can configure groups of nodes into worker host groups and proxy host groups. This operation also involves pre-planning the namespaces, because each host group is mapped to a namespace.
VLAN subnet: The network infrastructure administrator can plan various subnet ranges for each node or host groups before IBM Cloud Private installation.
Multiple LDAP supports: Multiple LDAP servers can be configured and the cluster administrator can form teams of users and user groups from various LDAP domains. For the steps to add multiple LDAP registration see https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/user_management/iso_ldap.html.
Namespaces: The cluster administrator can create namespaces for logical grouping of resources. These namespaces can be created after the IBM Cloud Private installation. If the cluster administrator chooses to have host groups, then the namespace planning is done before installation.
Network ingress controllers: The cluster administrator must plan the ingress controllers before installation so that the installer can create an ingress controller for each proxy host group, mapped to one namespace.
Users, user groups, and teams: Users and user groups can be onboarded onto the IBM Cloud Private platform and grouped into teams that are mapped to namespaces and other resources.
Network policies: Team administrators and operators can create network policies to create firewall rules at the namespace scope.
Pod security policies: The cluster administrator can create policies that either allow or prevent container images from running in select namespaces or on select nodes.
6.3.1 Scenarios
Let us consider the following scenario. In an organization there are two teams, team1 and team2, that want to enforce user, compute, and network isolation by confining workload deployments to virtual and physical resources. During the IBM Cloud Private installation we created isolated worker and proxy nodes. Post installation, we onboarded team members from both teams into IBM Cloud Private and assigned them the namespaces ns-team1 and ns-team2, respectively.
Namespace ns-team1 is bound to the worker node workerteam1 and the proxy node proxyteam1.
Namespace ns-team2 is bound to the worker node workerteam2 and the proxy node proxyteam2.
Example 6-1 shows that team1 and team2 have dedicated worker and proxy nodes.
Example 6-1 Output of the kubectl get nodes command
root@acerate1:~# kubectl get nodes
 
NAME STATUS ROLES AGE VERSION
172.16.16.233 Ready workerteam2 19h v1.12.4+icp-ee
172.16.16.234 Ready proxyteam2 19h v1.12.4+icp-ee
172.16.236.223 Ready etcd,management,master,proxy,worker 20h v1.12.4+icp-ee
172.16.237.124 Ready workerteam1 19h v1.12.4+icp-ee
172.16.237.225 Ready proxyteam1 19h v1.12.4+icp-ee
For both teams, user and group information is onboarded from LDAP into IBM Cloud Private. Deployments made by users of team1 go into the namespace ns-team1, are scheduled on the worker node workerteam1, and use the proxy node proxyteam1.
Deployments made by users of team2 go into the namespace ns-team2, are scheduled on the worker node workerteam2, and use the proxy node proxyteam2. Figure 6-4 shows that team1 and team2 from LDAP are onboarded in IBM Cloud Private.
Figure 6-4 Team1 and team2 from LDAP have been onboarded
So, in the above scenario we were able to achieve the following isolation features:
Host groups: We are able to isolate the proxy and worker nodes for each team.
VLAN subnet: The worker and proxy nodes of each team are in the same subnet. Team1 is using the subnet 172.16.237.0/24 and team2 is using the subnet 172.16.16.0/24.
Namespaces: Both teams have been assigned to different namespaces for logical grouping of resources. This means team1 has been assigned to the namespace ns-team1 and team2 has been assigned to the namespace ns-team2.
Network ingress controllers: Both teams have isolated proxy nodes, so they will use an ingress controller from their respective proxy nodes.
Users, user groups, and teams: Any users and user groups from LDAP can be onboarded into each team, and the cluster administrator can create new teams.
Note that the pods deployed by team1 can still talk to the pods of team2. For example, both teams deployed the Node.js sample application from the IBM Cloud Private Helm chart. To stop the communication between the pods of team1 and team2, we execute the following steps in order.
Example 6-2 Getting the details of the pod deployed by team1
root@acerate1:~# kubectl get po -n ns-team1 -o wide | awk {' print $1" " $6" " $7'} | column -t
NAME IP NODE
nodejs-deployment-team1-nodejssample-nodejs-c856dff96-z84gv 10.1.5.5 172.16.237.124
Example 6-3 Getting the details of the pod deployed by team2
root@acerate1:~# kubectl get po -n ns-team2 -o wide | awk {' print $1" " $6" " $7'} | column -t
NAME IP NODE
nodejs-deployment-team2-nodejssample-nodejs-7c764746b9-lmrcc 10.1.219.133 172.16.16.233
Example 6-4 Getting the service details of the pod deployed by team1
root@acerate1:~# kubectl get svc -n ns-team1 | awk {' print $1" " $5'} | column -t
NAME PORT(S)
nodejs-deployment-team1-nodejssample-nodejs 3000:31061/TCP
Example 6-5 Getting the service details of the pod deployed by team2
root@acerate1:~# kubectl get svc -n ns-team2 | awk {' print $1" " $5'} | column -t
NAME PORT(S)
nodejs-deployment-team2-nodejssample-nodejs 3000:30905/TCP
Example 6-6 Accessing the pod of team2 from the pod of team1
root@acerate1:~# kubectl exec -it nodejs-deployment-team1-nodejssample-nodejs-c856dff96-z84gv -n ns-team1 -- /bin/bash -c "curl 10.1.219.133:3000"
<!--
Licensed Materials - Property of IBM
(C) Copyright IBM Corp. 2018. All Rights Reserved.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
-->
<!DOCTYPE html>
<html lang="en">
<head>
....
....
<div class="footer">
<p>Node.js is a trademark of Joyent, Inc. and is used with its permission. We are not endorsed by or affiliated with Joyent.</p>
</div>
</div>
</body>
At this point, we need to create network policies that act as firewall rules at the namespace scope. This stops the communication between the pods of team1 and team2. Example 6-7, Example 6-8, Example 6-9 on page 245, Example 6-10 on page 245, and Example 6-11 on page 245 show how to do it.
Example 6-7 Patching the namespace for team1 with label name: ns-team1
apiVersion: v1
kind: Namespace
metadata:
  name: ns-team1
  labels:
    name: ns-team1
Example 6-8 Patching the namespace for team2 with label name: ns-team2
apiVersion: v1
kind: Namespace
metadata:
  name: ns-team2
  labels:
    name: ns-team2
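As a usage sketch, the two manifests above can be saved to files and applied with kubectl apply -f, or the labels can be added directly to the existing namespaces; the file names below are placeholders:
# Apply the labeled namespace definitions (file names are hypothetical)
kubectl apply -f ns-team1.yaml
kubectl apply -f ns-team2.yaml

# Alternatively, label the existing namespaces in place
kubectl label namespace ns-team1 name=ns-team1 --overwrite
kubectl label namespace ns-team2 name=ns-team2 --overwrite
The name labels are what the namespaceSelector entries in Example 6-9 and Example 6-10 match on.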
Example 6-9 Creating a network policy for team1 to stop communication from any pods except its own pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: networkpolicy-team1
  namespace: ns-team1
spec:
  policyTypes:
  - Ingress
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ns-team1
Example 6-10 Creating a network policy for team2 to stop communication from any pods except its own pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: networkpolicy-team2
  namespace: ns-team2
spec:
  policyTypes:
  - Ingress
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ns-team2
Example 6-11 Trying to access the pod of team2 from the pod of team1 or vice versa will fail
root@acerate1:~# kubectl exec -it nodejs-deployment-team1-nodejssample-nodejs-c856dff96-z84gv -n ns-team1 -- /bin/bash -c "curl 10.1.219.133:3000"
curl: (7) Failed to connect to 10.1.219.133 port 3000: Connection timed out
If team1 wants to use a different level of security for its pods than team2, it can create its own pod security policy and bind it to its namespace. For detailed steps see the following URL:
6.4 The significance of the admission controller in IBM Cloud Private
After authentication and authorization, the next step that IBM Cloud Private performs is the admission control. By the time we have reached this phase, it has already been determined that the request came from an authenticated user and that the user is authorized to perform this request. What we care about right now is whether the request meets the criteria for what we consider to be a valid request and if not, what actions need to be taken. Should we reject the request entirely, or should we alter it to meet our business policies?
Admission control is where an administrator can really start to wrangle the users’ workloads. With admission control, you can limit the resources, enforce policies, and enable advanced features.
In the following sections you can find some examples of admission controllers.
6.4.1 Pod security policy
The pod security policies can be used to enforce container image security for the pods in your cluster. A pod security policy is a cluster level resource that controls the security sensitive aspects of a pod’s specification and the set of conditions that must be met for a pod to be admitted into the cluster. The pod security policy is applied to the namespace by creating a ClusterRoleBinding or RoleBinding with the respective pod security policy ClusterRole for all service accounts in the namespace.
The pod security policies allow cluster administrators to create pod isolation policies and assign them to namespaces and worker nodes. IBM Cloud Private provides predefined policies that you can apply to your pod by associating them with a namespace during the namespace creation. These predefined pod security policies apply to most of the IBM content charts.
The following list shows the types and descriptions that range from the most restrictive to the least restrictive:
ibm-restricted-psp: This policy requires pods to run with a non-root user ID, and prevents pods from accessing the host.
ibm-anyuid-psp: This policy allows pods to run with any user ID and group ID, but prevents access to the host.
ibm-anyuid-hostpath-psp: This policy allows pods to run with any user ID and group ID and any volume, including the host path.
 
Attention: This policy allows hostPath volumes. Ensure that this is the level of access that you want to provide.
ibm-anyuid-hostaccess-psp: This policy allows pods to run with any user ID and group ID, any volume, and full access to the host.
Attention: This policy allows full access to the host and network. Ensure that this is the level of access that you want to provide.
ibm-privileged-psp: This policy grants access to all privileged host features and allows a pod to run with any user ID and group ID and any volume.
Attention: This policy is the least restrictive and must be used only for cluster administration. Use with caution.
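As noted at the beginning of this section, a pod security policy is applied to a namespace by binding a ClusterRole that allows use of the policy to all service accounts in that namespace. The following is only a sketch of that pattern for the namespace ns-team1 and the predefined ibm-restricted-psp policy; the ClusterRole and RoleBinding names are hypothetical, and IBM Cloud Private may already provide equivalent ClusterRoles for its predefined policies:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: restricted-psp-user          # hypothetical name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["ibm-restricted-psp"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: restricted-psp-binding       # hypothetical name
  namespace: ns-team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: restricted-psp-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:ns-team1   # all service accounts in the namespace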
If you install IBM Cloud Private version 3.1.1 or later as a new installation, the default pod security policy setting is restricted. When it is restricted, the ibm-restricted-psp policy is applied by default to all of the existing and newly created namespaces. You can also create your own pod security policy. For more information see the following link:
The following are some of the recommended practices for using pod security policies:
Running privileged pods separately from unprivileged pods
Unprivileged pods are those that can run with the ibm-restricted-psp pod security policy. These containers do not require any elevated privileges and are less likely to affect other containers on the same node. You should separate any pods that require special privileges, especially if the workload is not completely trusted or documented.
Binding pod security policies to namespaces instead of service accounts
Kubernetes does not check for elevated role-based access control (RBAC) permissions when a cluster administrator assigns a service account to a pod. It only checks for elevated permissions when a RoleBinding or ClusterRoleBinding is created. If a cluster administrator creates several service accounts in a namespace with various levels of privileges, then the namespace is only as secure as the service account that has the most privileges. It is safer and easier for a cluster administrator to examine the security settings of a namespace when a pod security policy is bound to all of the service accounts rather than the individual ones.
Specifying one pod security policy per namespace
The pod security policies are assigned to pods by the pod admission controller, based on the user that creates the pod. For pods created by controllers, such as Deployments, the namespace user is the service account. If more than one policy matches the pod's declared security context, any one of the matching policies might be applied, so keeping a single policy per namespace makes the effective security settings predictable.
Avoid creating pods directly
Pods can be created directly, with the user’s credentials. This can circumvent the pod security policy that is bound to the service accounts in the target namespace. A cluster administrator can run a privileged pod in a namespace that is configured as an un-privileged namespace.
6.4.2 ResourceQuota
This admission controller will observe the incoming requests and ensure that they do not violate any of the constraints enumerated in the ResourceQuota object in a namespace.
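For illustration, a minimal ResourceQuota such as the following sketch (all names and values are placeholders) would cause the admission controller to reject any request that pushes the namespace over these totals:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota            # hypothetical name
  namespace: ns-team1
spec:
  hard:
    pods: "20"                # at most 20 pods in the namespace
    requests.cpu: "4"         # total CPU requests across all pods
    requests.memory: 8Gi
    limits.cpu: "8"           # total CPU limits across all pods
    limits.memory: 16Gi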
6.4.3 LimitRange
This admission controller will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the LimitRange object in a namespace.
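Similarly, a minimal LimitRange such as the following sketch (values are placeholders) sets per-container defaults and an upper bound that the admission controller enforces for every new pod in the namespace:
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits      # hypothetical name
  namespace: ns-team1
spec:
  limits:
  - type: Container
    defaultRequest:           # applied when a container specifies no requests
      cpu: 100m
      memory: 128Mi
    default:                  # applied when a container specifies no limits
      cpu: 500m
      memory: 512Mi
    max:                      # requests and limits may not exceed these values
      cpu: "2"
      memory: 2Gi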
6.4.4 AlwaysPullImages
This admission controller modifies every new pod to force the image pull policy to Always. This is useful in a multitenant cluster so that users can be assured that their private images can be used only by those who have the credentials to pull them. Without this admission controller, once an image has been pulled to a node, any pod from any user can use it simply by knowing the image’s name (assuming the pod is scheduled onto the right node), without any authorization check against the image. When this admission controller is enabled, images are always pulled prior to starting containers, which means valid credentials are required.
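One way to confirm the effect on a running workload is to inspect the image pull policy that ended up in the pod specification; the pod name and namespace below are placeholders:
# Print the image pull policy of each container in a pod; with AlwaysPullImages
# enabled, every container should show "Always"
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{range .spec.containers[*]}{.name}{" "}{.imagePullPolicy}{"\n"}{end}'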
IBM Cloud Private supports all of the Kubernetes admission controllers. For more details see the following link:
6.5 Image security
The IBM Cloud Private component Image Manager runs on top of the Docker registry V2 API. It integrates with the Docker registry to provide a local registry service. The Image Manager uses the cluster's authentication service to authenticate the end user. The Docker command line client is used to push or pull images in your cluster.
Figure 6-5 on page 249 shows the Image Manager architecture.
Figure 6-5 Image Manager architecture
6.5.1 Pushing and pulling images
In order to push or pull an image, we need to log in to our private image registry:
docker login <cluster_CA_domain>:8500
<cluster_CA_domain> is the certificate authority (CA) domain that was set in the config.yaml file during installation. If you did not specify a CA domain name, the default value is mycluster.icp.
You can push or pull the image only if the namespace resource is assigned to a team for which you have the correct role. Administrators and operators can push or pull the image. Editors and viewers can pull images. Unless you specify an imagePullSecret, you can access the image only from the namespace that hosts it.
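After logging in, pushing an image follows the usual Docker workflow of tagging the image into a namespace of the private registry and pushing it; the namespace and image names below are placeholders:
# Tag a local image into the development namespace of the private registry
docker tag sample-app:1.0 mycluster.icp:8500/development/sample-app:1.0

# Push the image; pulling works the same way for roles that allow it
docker push mycluster.icp:8500/development/sample-app:1.0
docker pull mycluster.icp:8500/development/sample-app:1.0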
The service account defined in a PodSpec can pull the image from the same namespace under the following conditions:
The PodSpec uses the default service account.
The service account is patched with a valid image pull secret (see the example after this list).
The PodSpec includes the name of a valid image pull secret.
The image scope is changed to global after the image is pushed.
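The following sketch shows how an image pull secret can be created and how the default service account can be patched with it; the secret name, namespace, and credentials are placeholders:
# Create a docker-registry secret for the private registry in the target namespace
kubectl create secret docker-registry my-registry-secret \
  --docker-server=mycluster.icp:8500 \
  --docker-username=<user> \
  --docker-password=<password> \
  -n development

# Patch the default service account so that pods using it can pull with this secret
kubectl patch serviceaccount default -n development \
  -p '{"imagePullSecrets": [{"name": "my-registry-secret"}]}'
Alternatively, reference the secret name directly in the imagePullSecrets field of the PodSpec.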
Changing the image scope
There are two ways to change the image scope:
Change the image scope using the command line
To change the scope from namespace to global, run the following command:
kubectl get image <image name> -n=namespace -o yaml | sed 's/scope: namespace/scope: global/g' | kubectl replace -f -
To change the scope from global to namespace, run the following command:
kubectl get image <image name> -n=namespace -o yaml | sed 's/scope: global/scope: namespace/g' | kubectl replace -f -
where the <image name> value is the name of the image for which the scope is changed.
Change the image scope using IBM Cloud Private UI
Perform the following steps to change the image scope using IBM Cloud Private UI:
1. From the navigation menu, click Container Images.
2. For the image that you want to update, click Open and close the List of options button and select Change Scope.
3. Select the scope from the drop-down menu in the Image dialog box.
4. Click Change Image Scope.
6.5.2 Enforcing container image security
Using the container image security enforcement feature, IBM Cloud Private can verify the integrity of a container image before it is deployed to an IBM Cloud Private cluster. For each image in a repository, an image policy scope of either the cluster or the namespace is applied. When you deploy an application, IBM container image security enforcement checks whether the namespace that you are deploying to has any policy regulations that must be applied.
If a namespace policy does not exist, then the cluster policy is applied. If the namespace policy and the cluster policy overlap, the cluster scope is ignored. If neither a cluster nor a namespace scope policy exists, your deployment fails to start.
Pods that are deployed to namespaces that are reserved for the IBM Cloud Private services bypass the container image security check.
The following namespaces are reserved for the IBM Cloud Private services:
kube-system
cert-manager
istio-system
Example 6-12 shows a sample image policy.
Example 6-12 Sample image policy.
apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: <ClusterImagePolicy_or_ImagePolicy>
metadata:
  name: <crd_name>
spec:
  repositories:
  - name: <repository_name>
    policy:
      va:
        enabled: <true_or_false>
In this example, repository_name specifies the name of the repository from which the image is pulled. A wildcard (*) character is allowed in the repository name and denotes that the images from all of the repositories are allowed or trusted. To set all of your repositories to trusted, set the repository name to the wildcard (*) and omit the policy subsections.
Repositories by default require a policy check, with the exception of the default mycluster.icp:8500 repository. An empty or blank repository name value blocks deployment of all the images.
When va is set to enabled: true (see Example 6-12 on page 250), the Vulnerability Advisor policy is enforced. It works only for the default IBM Cloud Private built-in container registry. With other image registries this option should be set to false; otherwise, the image will not be pulled.
ClusterImagePolicy or ImagePolicy for a namespace can be viewed or edited like any other Kubernetes object.
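For example, assuming the standard plural resource names for these custom resource definitions, the policies can be listed and edited with kubectl; the policy name and namespace below are placeholders:
# List cluster-wide and namespace-scoped image policies
kubectl get clusterimagepolicies
kubectl get imagepolicies -n <namespace>

# Edit an existing namespace-scoped policy
kubectl edit imagepolicy <policy-name> -n <namespace>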
Default image security enforcement
Since the release of IBM Cloud Private Version 3.1.1, IBM container Image Security Enforcement is turned on by default. Any image that does not meet the policy will not be deployed successfully.
Let us check this out with an example of hello-world image enforcement with kubectl. We run the sample hello-world Docker image, as shown in Example 6-13.
Example 6-13 Hello-world docker image
kubectl run -it --rm imagepolicy --image=hello-world --restart=Never
 
Error from server (InternalError): Internal error occurred: admission webhook "trust.hooks.securityenforcement.admission.cloud.ibm.com" denied the request:
Deny "docker.io/hello-world", no matching repositories in ClusterImagePolicy and no ImagePolicies in the "default" namespace
The security enforcement hook blocks the running of the image. This is a great security feature enhancement that prevents unwanted images from being run in the IBM Cloud Private cluster.
Now we will create a whitelist to enable this specific Docker image from Docker Hub. For each repository from which you want to allow images to be pulled, you have to define the repository name, where the wildcard (*) is allowed. You can also define the policy of the Vulnerability Advisor (VA). If you set the VA enforcement to true, only those images that have passed the vulnerability scanning can be pulled. Otherwise, they will be denied.
Create the following image policy YAML as shown in Example 6-14.
Example 6-14 Image policy yaml
apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ImagePolicy
metadata:
  name: my-cluster-images-whitelist
  namespace: default
spec:
  repositories:
  - name: docker.io/hello-world
    policy:
      va:
        enabled: false
Here we create an image policy that applies to the default namespace. We give the exact name of the image and disable the VA policy. Save it as image-policy.yaml, and apply it with the kubectl apply -f image-policy.yaml command.
Now run the same command again. You will see Docker's hello-world message, as shown in Example 6-15.
Example 6-15 Hello world message displayed
kubectl run -it --rm imagepolicy --image=hello-world --restart=Never
 
Hello from Docker!
This message shows that your installation appears to be working correctly.
 
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
 
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
 
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
 
For more examples and ideas, visit:
https://docs.docker.com/get-started/
 
pod "imagepolicy" deleted
Perform the following steps for the hello-world image enforcement example with the dashboard:
1. Log in to the dashboard.
2. Go to Manage Resource Security → Image Policies.
3. Select the my-cluster-images-whitelist that was created with the kubectl command.
4. Remove the policy as shown in Figure 6-6 on page 253.
Figure 6-6 Remove the image policy
5. Now the hello-world image is prevented from running again.
6. Click the Create Image Policy button at the upper right corner; the dialog shown in Figure 6-7 is displayed.
Figure 6-7 Add image policy
7. Give it a name such as "my-white-list".
8. Specify the scope as Cluster, which applies the enforcement to the whole cluster. Unlike the namespace-based example above, after this policy is created, any namespace can run the image.
9. Set the VA scan to “not enforced”. Currently only the default IBM Cloud Private registry has the VA scanning feature; other Docker registries do not. If you enable it for them, none of their images can be run.
10. Click the Add button. After you add this policy, you can run the image in any namespace, as shown in Example 6-16.
Example 6-16 Run the hello-world in kube-public namespace
kubectl run -it --namespace=kube-public --rm imagepolicy --image=hello-world --restart=Never
Example 6-17 shows the list of the images that are allowed at the cluster level by default in IBM Cloud Private 3.1.2.
Example 6-17 Default enabled image list in IBM Cloud Private 3.1.2
<your icp cluster name>:8500/*
registry.bluemix.net/ibm/*
cp.icr.io/cp/*
docker.io/apache/couchdb*
docker.io/ppc64le/*
docker.io/amd64/busybox*
docker.io/vault:*
docker.io/consul:*
docker.io/python:*
 
docker.io/centos:*
docker.io/postgres:*
docker.io/hybridcloudibm/*
docker.io/ibmcom/*
docker.io/db2eventstore/*
docker.io/icpdashdb/*
docker.io/store/ibmcorp/*
docker.io/alpine*
docker.io/busybox*
docker.io/dduportal/bats:*
docker.io/cassandra:*
docker.io/haproxy:*
docker.io/hazelcast/hazelcast:*
docker.io/library/busybox:*
docker.io/minio/mc:*
docker.io/minio/minio:*
docker.io/nginx:*
docker.io/open-liberty:*
docker.io/openwhisk/*
docker.io/rabbitmq:*
docker.io/radial/busyboxplus:*
docker.io/ubuntu*
docker.io/websphere-liberty:*
docker.io/wurstmeister/kafka:*
docker.io/zookeeper:*
docker.io/ibmcloudcontainers/strongswan:*
docker.io/opsh2oai/dai-ppc64le:*
docker.io/redis*
docker.io/f5networks/k8s-bigip-ctlr:*
docker.io/rook/rook:*
docker.io/rook/ceph:*
docker.io/couchdb:*
docker.elastic.co/beats/filebeat:*
docker.io/prom/statsd-exporter:*
docker.elastic.co/elasticsearch/elasticsearch:*
docker.elastic.co/kibana/kibana:*
docker.elastic.co/logstash/logstash:*
quay.io/k8scsi/csi-attacher:*
quay.io/k8scsi/driver-registrar:*
quay.io/k8scsi/nfsplugin:*
quay.io/kubernetes-multicluster/federation-v2:*
k8s.gcr.io/hyperkube:*
registry.bluemix.net/armada-master/ibm-worker-recovery:*
For any image that is not in the default list, you have to create either a namespace-based image policy or a cluster-level policy to allow the image to run.
 