Service-level authorization for databases

To protect TCP connection-based services such as databases, only a legitimate service should be able to connect.

In this section, we will create a new ratings-v2 version and connect it to a MongoDB database service. Our aim is for only the ratings-v2 service to be able to access the MongoDB database:

  1. Review 19-create-sa-ratings-v2.yaml. Notice the bookinfo-ratings-v2 service account, which we will use to create a ratings-v2 deployment that uses MongoDB:
# Script : 19-create-sa-ratings-v2.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings-v2
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ratings-v2
...
        version: v2
    spec:
      serviceAccountName: bookinfo-ratings-v2
      containers:
      - name: ratings
        image: istio/examples-bookinfo-ratings-v2:1.10.0
        imagePullPolicy: IfNotPresent
        env:
        # ratings-v2 will use mongodb as the default db backend.
        - name: MONGO_DB_URL
          value: mongodb://mongodb:27017/test
        ports:
        - containerPort: 9080
...
  2. Create a service account called bookinfo-ratings-v2 and a ratings-v2 deployment:
$ kubectl -n istio-lab apply -f 19-create-sa-ratings-v2.yaml 
serviceaccount/bookinfo-ratings-v2 created
deployment.extensions/ratings-v2 created
  3. Next, we need a destination rule for the ratings service so that we can use v2. We created this destination rule while enabling mTLS for services. Verify it using the following command:
$ kubectl -n istio-lab get dr ratings -o yaml | grep -A6 subsets:
  subsets:
  - labels:
      version: v1
    name: v1
  - labels:
      version: v2
    name: v2
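For reference, the subsets shown above come from a destination rule that should look roughly like the following sketch. The trafficPolicy block is an assumption based on the mutual TLS mode enabled for services earlier; only the host and subsets are confirmed by the output above:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # assumed: set while enabling mTLS for services
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```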
  4. The ratings virtual service is tagged to subset v1. Let's check this:
$ kubectl -n istio-lab get vs ratings -o yaml | grep -B1 subset:
        host: ratings
        subset: v1
  5. To route traffic to version v2 of the ratings service, we will update (patch) the existing ratings virtual service so that it uses subset v2 of the ratings service:
$ kubectl -n istio-lab patch vs ratings --type json -p '[{"op":"replace","path":"/spec/http/0/route/0/destination/subset","value": "v2"}]'
virtualservice.networking.istio.io/ratings patched
  6. Confirm this was set properly. With this, the ratings service will direct its traffic to the ratings-v2 microservice:
$ kubectl -n istio-lab get vs ratings -o yaml | grep -B1 subset:
        host: ratings
        subset: v2
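After the patch, the full ratings virtual service should look roughly like the following sketch. Only the subset value was changed by the patch; the surrounding structure is assumed from the standard Bookinfo sample:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v2   # changed from v1 by the patch
```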

  7. The ratings-v2 microservice calls MongoDB. Review the mongodb service definition in 20-deploy-mongodb-service.yaml:
# Script : 20-deploy-mongodb-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    name: mongo
  selector:
    app: mongodb
...
  8. The following is the deployment definition for MongoDB:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodb-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mongodb
        version: v1
    spec:
      containers:
      - name: mongodb
        image: istio/examples-bookinfo-mongodb:1.10.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 27017
...
  9. Create a mongodb service and a mongodb-v1 deployment:
$ kubectl -n istio-lab apply -f 20-deploy-mongodb-service.yaml 
service/mongodb created
deployment.extensions/mongodb-v1 created
  10. Wait for the mongodb pods to be ready and check them:
$ kubectl -n istio-lab get pods -l app=mongodb
NAME                          READY   STATUS    RESTARTS   AGE
mongodb-v1-787688669c-lqcbq   2/2     Running   0          45s

Open https://bookinfo.istio.io/productpage in your browser. Note that the Ratings service is shown as currently unavailable. This is expected: we pointed the ratings virtual service to v2, but we haven't yet defined a ServiceRole (permission) and ServiceRoleBinding (grant) for it.
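The request is denied because Istio RBAC was enabled earlier (through the 09-create-clusterrbac-config.yaml script used in this chapter) and authorization is then deny-by-default for the covered namespace. As a reminder, a minimal sketch of such a configuration, assuming inclusion mode for the istio-lab namespace:

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
  name: default          # ClusterRbacConfig is a singleton named "default"
spec:
  mode: ON_WITH_INCLUSION
  inclusion:
    namespaces: ["istio-lab"]   # enforce RBAC only in this namespace
```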

  11. Define ServiceRole for MongoDB:
# Script : 21-create-service-role-mongodb.yaml

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: mongodb-viewer
spec:
  rules:
  - services: ["mongodb.istio-lab.svc.cluster.local"]
    constraints:
    - key: "destination.port"
      values: ["27017"]

Note that the permission is created through the ServiceRole primitive: it applies to the mongodb service and allows connections to port 27017. In effect, this is a firewall rule defined at the service level.
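For HTTP services, a ServiceRole can constrain methods and paths in addition to ports. The following is a hypothetical example for comparison; the role name, service, and paths are illustrative and not used in this exercise:

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: ratings-viewer   # hypothetical example only
spec:
  rules:
  - services: ["ratings.istio-lab.svc.cluster.local"]
    methods: ["GET"]          # allow read-only access
    paths: ["/ratings/*"]     # restrict to the ratings API paths
```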

  12. Create ServiceRole for MongoDB:
$ kubectl -n istio-lab apply -f 21-create-service-role-mongodb.yaml 
servicerole.rbac.istio.io/mongodb-viewer created
  13. Define ServiceRoleBinding to authorize the bookinfo-ratings-v2 service account to use the permission we defined through the mongodb-viewer ServiceRole:
# Script : 22-create-service-role-binding-mongodb.yaml

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-mongodb-viewer
spec:
  subjects:
  - user: "cluster.local/ns/istio-lab/sa/bookinfo-ratings-v2"
  roleRef:
    kind: ServiceRole
    name: "mongodb-viewer"
  14. Create ServiceRoleBinding bind-mongodb-viewer:
$ kubectl -n istio-lab apply -f 22-create-service-role-binding-mongodb.yaml 
servicerolebinding.rbac.istio.io/bind-mongodb-viewer created

Wait for a few seconds and refresh https://bookinfo.istio.io/productpage. The ratings service should be available now. Unfortunately, it isn't: the ratings service still shows up as currently unavailable. Let's debug this.

First, let's check whether we have any conflicts in our destination rules between the ratings pod and the mongodb service:

  15. Find out the ratings v2 pod name:
$ export RATINGS_POD=$(kubectl -n istio-lab get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}') ; echo $RATINGS_POD
ratings-v1-79b6d99979-k2j7t
  16. Check for mTLS conflicts between the ratings-v2 pod and the mongodb service. You may either see a CONFLICT status or the output Error: nothing to output:
$ istioctl authn tls-check $RATINGS_POD.istio-lab mongodb.istio-lab.svc.cluster.local
HOST:PORT                                   STATUS     SERVER   CLIENT   AUTHN POLICY        DESTINATION RULE
mongodb.istio-lab.svc.cluster.local:27017   CONFLICT   mTLS     HTTP     default/istio-lab   mongodb/istio-lab

OR

Error: nothing to output

Notice that there is a conflict between the ratings-v2 pod and the mongodb service. This is because we didn't create a destination rule for mongodb that enforces mutual TLS for the client (ratings:v2).

  17. Define DestinationRule for the MongoDB service:
# Script : 23-create-mongodb-destination-rule.yaml

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: mongodb
spec:
  host: mongodb.istio-lab.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  18. Create DestinationRule and wait for a few seconds for the rule to propagate:
$ kubectl -n istio-lab apply -f 23-create-mongodb-destination-rule.yaml
destinationrule.networking.istio.io/mongodb created
  19. Check for any mTLS conflicts:
$ istioctl authn tls-check $RATINGS_POD.istio-lab mongodb.istio-lab.svc.cluster.local
HOST:PORT                                   STATUS   SERVER   CLIENT   AUTHN POLICY        DESTINATION RULE
mongodb.istio-lab.svc.cluster.local:27017   OK       mTLS     mTLS     default/istio-lab   mongodb/istio-lab

If the status shows OK, try refreshing https://bookinfo.istio.io/productpage. The ratings service should work now.

Let's do one more simple test to change ratings in the MongoDB database:

  20. Run the following commands to change ratings from 5 to 1 and from 4 to 3, respectively:
$ export MONGO_POD=$(kubectl -n istio-lab get pod -l app=mongodb -o jsonpath='{.items..metadata.name}') ; echo $MONGO_POD
mongodb-v1-787688669c-lqcbq

$ cat << 'EOF' | kubectl -n istio-lab exec -i -c mongodb $MONGO_POD -- mongo

use test
db.ratings.find().pretty()
db.ratings.update({"rating": 5},{$set:{"rating":1}})
db.ratings.update({"rating": 4},{$set:{"rating":3}})
db.ratings.find().pretty()
exit
EOF

MongoDB shell version v4.0.6
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("22ba0a3d-d2d4-480e-bac5-359d74912beb") }
MongoDB server version: 4.0.6
switched to db test
{ "_id" : ObjectId("5d42d77d07ec5966640aea1b"), "rating" : 4 }
{ "_id" : ObjectId("5d42d77d07ec5966640aea1c"), "rating" : 5 }
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
{ "_id" : ObjectId("5d42d77d07ec5966640aea1b"), "rating" : 3 }
{ "_id" : ObjectId("5d42d77d07ec5966640aea1c"), "rating" : 1 }
bye

Refresh the page to see the ratings change from 4 to 3 and 5 to 1.

We need to create ServiceRole and ServiceRoleBinding for the httpbin service so that we can use the same service in later chapters.

  21. Run the 24-create-service-role-binding-httpbin.yaml script:
$ kubectl -n istio-lab apply -f 24-create-service-role-binding-httpbin.yaml 
servicerole.rbac.istio.io/httpbin created
servicerolebinding.rbac.istio.io/bind-httpbin created
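Based on the object names in the output above, 24-create-service-role-binding-httpbin.yaml should contain a ServiceRole named httpbin and a ServiceRoleBinding named bind-httpbin along the following lines. This is a sketch: the service FQDN and the subject are assumptions, not confirmed by the output:

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: httpbin
spec:
  rules:
  - services: ["httpbin.istio-lab.svc.cluster.local"]   # assumed service FQDN
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-httpbin
spec:
  subjects:
  - user: "*"            # assumed: allow any authenticated caller
  roleRef:
    kind: ServiceRole
    name: "httpbin"
```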

  22. Delete role-based access control for the next chapter and patch the ratings virtual service so that it goes back to v1:
$ kubectl -n istio-lab delete -f 09-create-clusterrbac-config.yaml

$ kubectl -n istio-lab patch vs ratings --type json -p '[{"op":"replace","path":"/spec/http/0/route/0/destination/subset","value": "v1"}]'

This concludes the security implementation in Istio. Istio is dynamic, and new security capabilities are continuously being added to allow integration with various services. We haven't covered all of the advanced capabilities here; next, we'll mention some of them, and it's recommended that you read up on these to find out more.
