Cloud Computing (Pilvetehnoloogia) 2021/22 spring


Practice 12 - Working with Kubernetes using Rancher

Kubernetes, also known as K8s, is an open-source system for automating the deployment, scaling, and management of containerized applications. In this practice session, you will explore how to set up a Kubernetes cluster using Rancher. Rancher is an open-source multi-cluster orchestration platform that lets operations teams deploy, manage, and secure enterprise Kubernetes. Further, you will learn to create, deploy, and scale containers in a pod, and to manage the message board application.

References

  • Kubernetes documentation: https://kubernetes.io/docs/home/
  • Rancher 2.X documentation guide: https://rancher.com/docs/rancher/v2.6/en/overview/architecture/

If you have problems, check:

  • Pinned messages in the #lab12-kubernetes Slack channel.
  • Ask questions in the #lab12-kubernetes Slack channel.

Exercise 12.1. Setting up a Rancher-based Kubernetes Cluster!

In this task, you're going to set up a two-node Rancher-based Kubernetes cluster as shown in the diagram below. Further, you will configure kubectl to interact with the Kubernetes cluster.

  • Create two VMs with the names Lab12_LASTNAME_Rancher_Master and Lab12_LASTNAME_Worker
    • Source: Use Volume Snapshot, choose ubuntu20SSDdocker
      • In this Ubuntu-based snapshot, Docker has already been installed for us (as we did in Lab 2).
      • Enable "Delete Volume on Instance Delete"
    • Flavour: m2.tiny
    • Security Groups: default and lab12
  • Connect to the Rancher_Master VM. In this VM we will configure a Rancher server used to set up and coordinate the Kubernetes cluster, and we will also make this VM the Kubernetes master.
    • Create a Rancher Server using the command sudo docker run -d --restart=unless-stopped --name rancher -p 80:80 -p 443:443 rancher/rancher:v2.4.15
      • Now you can access the Rancher Web Interface at http://Lab12_LASTNAME_Rancher_Master:80.
      • Set the login password; after logging in, the user interface should look as shown below.
  • We will create a cluster with one master and one worker.
    • Click on Add Cluster --> From Existing Nodes (Custom) and click Next
    • Add Cluster Name: Freely choose and Leave other values as default, then Click Next
    • In Customize Node Run Command ---> Node Options --> Node Role, select etcd, Control Plane, and Worker. This is required for adding a master.
    • Copy the associated command from your Rancher instance, it looks something like:
    sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.15 --server https://172.xx.xx.xx --token xsfs9sfnthsg2tk9q7jdv2zlmvgxjhnmfpl5p9ssmxp56wmmf2j775 --ca-checksum 91070f2fa8faf1302aa35e4d4e79fa67d3d274f665ac2474e5daad2475f874e4 --etcd --controlplane --worker
    • Paste and run this command in Lab12_LASTNAME_Rancher_Master shell
    • Click on Done and wait a few minutes; you should see the node added to the cluster.
  • Similarly add the worker:
    • Click on the options menu in the cluster and select Edit as shown below
  • In Customize Node Run Command ---> Node Options --> Node Role, Select Worker. Now copy and run the command in the Worker node.
  • Finally, you should see the two nodes in the cluster. Click on Nodes in the web interface; you should see the two nodes as shown below.
  • Deliverable: Take a screenshot as above (your IP should be visible).
  • You can also check the cluster dashboard by clicking on Try Dashboard; here you can look into the Nodes, Pods, Services, Workloads, etc.

Setting up kubectl

kubectl is the Kubernetes command-line tool that allows you to run commands against a Kubernetes cluster. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. You can use the kubectl provided by Rancher by clicking on your cluster and then clicking on Launch kubectl.

For easier handling, we will set up kubectl on your laptop and perform the operations mentioned below, depending on your OS.

  • Windows Users
    • Open a command prompt and download kubectl.exe: curl -LO "https://dl.k8s.io/release/v1.23.0/bin/windows/amd64/kubectl.exe"
    • Append or prepend the kubectl binary folder to your PATH environment variable.
    • Test that the installed kubectl version matches the downloaded one: kubectl version --client
    • For more information: https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/
  • Linux Users
    • In your laptop terminal, download kubectl: curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    • Download checksum file curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
    • Verify the checksum: echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check. This should display kubectl: OK
    • Install kubectl: sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    • Test that kubectl works: kubectl version --client
    • For more information: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

Configuring Kubectl to access Rancher Kubernetes Cluster

  • Go to your cluster in the Rancher web interface (select it in the top-left dropdown menu) and click on Kubeconfig File on the right side of the window; the config file will be displayed. Copy the file content.
  • Create a directory .kube in your OS user's home directory, create a file named config inside it, and paste the content of the config file copied in the previous step (a minimal sketch of these steps is shown after this list).
  • Now test that kubectl works with the Rancher cluster: kubectl cluster-info. You should see output like Kubernetes control plane is running at https://<VM_IP>/k8s/clusters/c-tlbbp....
  • Deliverable: Take a screenshot of the kubectl output as above.
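To illustrate these steps, here is a minimal sketch for Linux/macOS (Windows users would create the directory as %USERPROFILE%\.kube and edit the config file there instead; the editor used below is just an example):

    # create the kubectl config directory in your home directory
    mkdir -p ~/.kube
    # open the config file, paste the Kubeconfig File content copied from Rancher, and save
    nano ~/.kube/config
    # verify kubectl can reach the Rancher-managed cluster
    kubectl cluster-info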

Exercise 12.2. Working with Pods!

Pods are the smallest, most basic deployable objects in Kubernetes. They can hold one or more containers (such as Docker containers) which share storage, network, and other resources. In this task, you will work with kubectl to create pods and deploy services.

Kubernetes objects are persistent entities in the Kubernetes system. To interact with the cluster's Kubernetes API, kubectl uses .yaml files in which the Kubernetes objects are specified. You can have a look at the writing YAML for Kubernetes Guide.

Task 12.2.1: Running a container in a Pod
  • We will write the YAML file to create the first pod (which runs a single busybox container). Create the YAML file first_pod.yaml
    • The first statement starts with apiVersion, which indicates the version of the Kubernetes API you're using to create this object. Add apiVersion: v1
    • Mention what kind of object you're creating, for example Pod or Deployment. Add kind: Pod
    • Next, mention the metadata. This is data that helps uniquely identify the object, including a name string, UID, and optional namespace. Add
metadata:
  name: first
  labels:
    app: myfirst
  • Finally, mention the specification of the object as shown below
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 180']
  • The final YAML file looks like Attach:first_pod.txt (an assembled sketch is also shown at the end of this task)
  • Create the pod kubectl create -f first_pod.yaml
  • You can check the created pod in Rancher Web Interface Try Dashboard-->Starred-->Pod
  • Check in terminal kubectl get pods or kubectl get pods -o wide (Check the difference between these two commands)
  • Get the logs of the pod kubectl logs first
  • Get the description of the pod kubectl describe pod first
    • Deliverable: Take a screenshot of the output of the above command.
  • Open a shell inside the pod using kubectl exec -it first -- /bin/sh
    • type the command cat /etc/resolv.conf inside the pod shell
  • Delete the pod kubectl delete -f first_pod.yaml
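For reference, the assembled first_pod.yaml (pieced together from the fragments above; the attached first_pod.txt remains authoritative) should look like this:

apiVersion: v1
kind: Pod
metadata:
  name: first
  labels:
    app: myfirst
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 180']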
Task 12.2.2: Scaling the Pods with ReplicaSets

ReplicaSets are Kubernetes controllers that are used to maintain the number and running state of pods, mainly to keep a predefined number of pods running. If more pods are up, the additional ones are terminated. Similarly, if one or more pods fail, new pods are started until the desired count is reached. The ReplicaSet uses labels to match the pods that it will manage.

  • Creating ReplicaSet
    • Create a file with the name replicaset.yml.
    • The ReplicaSet definition file will be like Attach:replicaset.txt (a sketch is shown at the end of this task):
      • The apiVersion should be apps/v1
      • The kind of this object is ReplicaSet
      • In metadata, we define the name by which we can refer to this ReplicaSet, and the labels by which it can be referred to.
      • The spec part is mandatory and mentions:
        • The number of replicas this controller should maintain.
        • The selection criteria by which the ReplicaSet will choose its pods.
        • The pod template is used to create (or recreate) new pods.
  • Create the ReplicaSet kubectl apply -f replicaset.yml
  • Check the created ReplicaSets kubectl get rs. Here, you should see 4 replicas of the Pod.
  • Check the list of the Pods kubectl get po. Take note of one pod name and remove that pod from the ReplicaSet using:
    • Removing a Pod from the ReplicaSet: kubectl edit pods web-<YOUR_SOME_SEQUENCE>. This command opens an editor with the definition of that pod; here change role:web to role:isolated. After this change, the pod will still be in the running state but is no longer managed by the ReplicaSet controller (which will start a replacement pod to keep the desired count).
  • Scale up the replicaset kubectl scale --replicas=10 -f replicaset.yml
  • Check the created replicas kubectl get rs
  • Scale down the replicaSet to 1.
  • Check the replica set.
  • Now, set up autoscaling of the ReplicaSet to a maximum of 5 replicas using kubectl. You can refer to the kubectl autoscale documentation to set the autoscale property (an example command is shown at the end of this task).

Deliverable: Take a screenshot of the output of the autoscale command and write the answer in a text file.
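As a reference, here is a sketch of what replicaset.yml could look like, consistent with the pod names (web-...) and the role: web label used in this task; the container image (nginx) is only an assumed placeholder, and the attached replicaset.txt remains authoritative:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      # the image below is an assumed placeholder; use the one from replicaset.txt
      - name: web
        image: nginx

For the autoscaling step, one possible command (assuming your ReplicaSet is named web; the CPU threshold is an arbitrary example) is kubectl autoscale rs web --min=1 --max=5 --cpu-percent=80.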

Exercise 12.3. Working with Kubernetes deployments

A Deployment resource uses a ReplicaSet to manage the pods; however, it handles updating them in a controlled way. Deployments are used to describe the desired state of an application in Kubernetes. They dictate how Pods are created, deployed, and replicated.

ReplicaSets have one major drawback: once you select the pods that are managed by a ReplicaSet, you cannot change their pod templates. For example, if you want to change the image from message-board.v1 to message-board.v2 in the pod template, you need to delete the ReplicaSet and create a new one. This increases downtime.

Task 12.3.1: Creating the first deployment

In this task, we are going to work with deployments using kubectl. We will create a deployment for the flask-based message board application. We will also use a rolling update to change the template image without downtime in the application deployment.

  • Create a file with the name first_deployment.yml
    • Provide API version apiVersion: apps/v1
    • Mention the kind, here it is kind: Deployment
    • Mention the metadata: object keys
      • Add name: flask-deployment
    • Provide specifications of the deployment as shown below
spec:
  selector:
    matchLabels:
      app: flask
  replicas: 2
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask
        image: shivupoojar/message-board.v1
        ports:
        - containerPort: 5000
          hostPort: 5000
  • The pods managed by the Deployment are chosen based on the matchLabels under the selector key.
  • replicas describes the number of pods that need to be created.
  • template includes the app labels, the list of containers, and associated objects.
  • The final deployment file should look like Attach:first_deployment.txt (an assembled sketch is shown at the end of this task).
  • Deploy the definition file using kubectl apply -f first_deployment.yml
  • Check your deployment kubectl get deployments. You should see the flask deployment with 2 replicas.
  • Get the description of the deployment kubectl describe deploy
  • List the pods kubectl get po. Here, you should see two pods.
  • Scale the deployment kubectl scale deployments/flask-deployment --replicas=4 and list the deployments
  • Let us use rolling update to change the Pod's template image kubectl set image deployments/flask-deployment flask=shivupoojar/message-board.v2
  • You can check the change in image name kubectl describe pods
  • Delete the deployment.
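For reference, the assembled first_deployment.yml (pieced together from the fragments above; the attached first_deployment.txt remains authoritative) should look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-deployment
spec:
  selector:
    matchLabels:
      app: flask
  replicas: 2
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask
        image: shivupoojar/message-board.v1
        ports:
        - containerPort: 5000
          hostPort: 5000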
Task 12.3.2: Creating deployments for the Message Board flask application and the PostgreSQL database.
  • Create a deployment definition file with name message-board.yml
    • Use the deployment definition from the first_deployment.yml file and update the image value to image: <YOUR_DOCKER_HUB_USERNAME/IMAGE_NAME>, i.e. the flask application you developed and pushed to Docker Hub in Practice Session 2, Task 2.3.3.
  • Similarly, we will create a deployment definition for Postgres in message-board.yml as follows:
    • Append the following modifications to the message-board.yml file.
      • Add --- at end of file.
      • You can copy the flask deployment block and modify the object as necessary
        • Change the metadata name key's value and the labels/app key values to postgres. Also set replicas: 1.
        • In the container spec, change the name to postgresql, the image to postgres:14.1-alpine, and containerPort to 5432, and remove the hostPort key.
  • Hold off on testing message-board.yml for now; in the next section we will add the definition for secrets. (A sketch of the resulting Postgres deployment block is shown below.)
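As a reference, here is a sketch of how the Postgres deployment block could look after these modifications (the exact layout may differ slightly in your file):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgresql
        image: postgres:14.1-alpine
        ports:
        - containerPort: 5432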
Task 12.3.3: Creating Secrets to store the postgres username and password

In this task, we are going to create a Secret in the deployment file to store the Postgres credentials.

  • Append the following definition at the end of the message-board.yml file.
    • Here, you can choose the password freely. Use the command echo -n '<FREELY_CHOOSE>' | base64 to convert the text characters to base64.
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret-config
data:
  # echo -n 'postgres' | base64
  username: cG9zdGdyZXM=
  # echo -n '<FREELY_CHOOSE>' | base64
  password: <FREELY_CHOOSE>
  • Again, your message-board.yml deployment definition is not complete, so hold on with testing!
Task 12.3.4: Updating deployment file to use secrets

Now, let us update the message-board.yml file to set environment variables for the Postgres and flask deployments.

  • In the message-board.yml file, move on to the Postgres deployment block.
  • Add the following after the ports object
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-secret-config
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret-config
              key: password
  • Move on to the flask deployment block and add the following block after ports
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-secret-config
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret-config
              key: password
        - name: DATABASE_URL
          value: postgresql://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@postgresql:5432/postgres 
Task 12.3.5: Testing the deployment
  • Now, create the deployment kubectl apply -f message-board.yml
  • Check the created deployments kubectl get deployments and secrets kubectl get secrets
  • Deliverable: Take a screenshot of the above command output (deployments).
  • Check the list of pods. If you see errors like CrashLoopBackOff, you can check them using kubectl logs <pod_name>.
    • You need not worry about errors in the flask application deployment. This is because the flask application cannot reach Postgres yet, since we have not exposed the Postgres container for access outside its pod.
  • Check the logs of postgres and flask application.
  • Delete the deployment kubectl delete -f message-board.yml
  • The deployment definition is not complete: we still need to add service definitions (we will add them in Exercise 12.4) to expose the flask application to the outside world. Further, we need to expose Postgres so that it is accessible outside its pod.

Exercise 12.4. Working with Kubernetes Services

A Service enables network access to a set of Pods in Kubernetes. Services select Pods based on their labels. The flask-based message board application is currently only accessible within the cluster of pods, so we use a Service: an object that makes a collection of Pods with appropriate labels available outside the cluster and provides a network load balancer to distribute the load evenly between the Pods. We will create a service that routes network traffic to the pods with the label app: flask.

  • Append to deployment file message-board.yml
    • Add --- at end of file.
    • Add the API Version apiVersion: v1
    • Mention kind: Service
    • Add metadata: with name: service-flask
    • Add specification object spec:
      • Add ports and targetPort
      • Finally selector, the label associated with the pod
  • The final service definition block should look like
---
apiVersion: v1
kind: Service
metadata:
  name: service-flask
spec:
  type: NodePort
  selector:
    app: flask
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
    name: tcp-5000

Similarly, we need to create a service for the Postgres deployment, since it is accessed by the flask application.

  • Append to deployment file message-board.yml
    • Add --- at end of file.
    • Change the metadata object name: postgresql and labels name: postgresql
    • In spec object, change port: 5432 and remove the targetPort
    • In the selector, change app to postgres (to match the Postgres deployment's labels)
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  labels:
    name: postgresql
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
  • Finally, it is now time to test the complete deployment message-board.yml.
    • Create the deployment. After this, the services and deployments are created.
    • Check the created pods kubectl get po -o wide
    • Check the deployments kubectl get deployments
    • Check for the services kubectl get services
      • Open your message board application at http://MASTER_IP:5000 and enter a few messages.
      • Deliverable: Take the screenshot of the web page showing the message board (Your IP should be visible).
    • If you see errors in the pods' running status, then check the logs of the pods.

Scale the flask application deployment

  • Use the following commands to scale and check the flask deployment
    • Scale to 2 replicas kubectl scale deployment flask-deployment --replicas=2
    • Check the scaling of the pods kubectl get pods
    • Get the deployment kubectl get deployment
      • Deliverable: Take the screenshot of the output of above commands.

Exercise 12.5. Working with Kubernetes Secrets

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. It is not good practice to keep the secret definition block in the deployment file message-board.yml. In this task, we will create Kubernetes secrets through kubectl.

  • Remove the secret definition block in message-board.yml.
  • Create secret using kubectl create secret generic postgres-secret-config --from-literal=username=postgres --from-literal=password=<CHOOSE_FREELY_PASSWORD>
  • Check for the created secrets kubectl get secret postgres-secret-config
  • Check the secret as YAML kubectl get secret postgres-secret-config -o yaml (an optional decoding check is shown after this list)
  • Deliverable: Take the screenshot of the above command.
  • Again deploy the message-board.yml deployment definition and test that the application works.
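If you want to verify that the stored values match what you created (optional, not required for the deliverable), you can decode a field from the secret; this assumes a Linux/macOS shell with base64 available:

    kubectl get secret postgres-secret-config -o jsonpath='{.data.password}' | base64 --decode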

Exercise 12.6. Working with Data Volume in Kubernetes

In this task, you are working with data volume management in a Kubernetes application deployment. In the previous tasks, the messages in the Postgres database were deleted once you deleted the pod, so we need to make the data persist and remain accessible to newly created pods.

  • In message-board.yml, inside the containers spec of the Postgres deployment definition block:
    • Add one more environment variable with name: PGDATA and value: /var/lib/postgresql/data/pgdata
    • You can refer to the example in the Kubernetes documentation
    • Add volumeMounts with name set to data-storage-volume and mountPath set to /var/lib/postgresql/data
    • The final volumeMounts in the container block should look like this (a combined sketch of the updated Postgres pod spec is shown at the end of this exercise)
        volumeMounts:
          - name: data-storage-volume
            mountPath: /var/lib/postgresql/data

  • Add the following under spec, after the containers block
      volumes:
        - name: data-storage-volume
          persistentVolumeClaim:
            claimName: postgres-db-claim
  • You also need to create a PersistentVolume and a PersistentVolumeClaim, so create data-volume.yml and add the following
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-db-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: manual
  • Create the data volume kubectl apply -f data-volume.yml
  • Create the deployment
  • Check for the mounts kubectl describe pod <POSTGRES_POD>
  • Deliverable: Take the screen shot of output of this command showing mounts.
  • Open your application in a browser and add a few messages
  • Delete the deployment
  • Again create the deployment
  • Open the application in the browser and you should see the previous messages.
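As a reference, here is a sketch of how the Postgres deployment's pod spec could look after these edits (only the relevant part is shown; the POSTGRES_USER and POSTGRES_PASSWORD env entries from Task 12.3.4 are elided with a comment):

    spec:
      containers:
      - name: postgresql
        image: postgres:14.1-alpine
        ports:
        - containerPort: 5432
        env:
        # ... keep the POSTGRES_USER and POSTGRES_PASSWORD entries from Task 12.3.4 here ...
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - name: data-storage-volume
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data-storage-volume
        persistentVolumeClaim:
          claimName: postgres-db-claim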

Bonus exercise: Checking the readiness and liveness of services

The goal of the bonus task is to experiment with how service liveness and lifecycle checks can be configured in Kubernetes deployments. Liveness and readiness probes are used to check the health of an application running inside a Pod's container.

  • Liveness probes: Kubernetes wants to know if your app is alive or dead.
  • Readiness probes: Kubernetes wants to know when your app is ready to serve traffic.

In this task, you need to add definitions for readiness and liveness checking for the flask application.

  • Use the message-board.yml
    • Add livenessProbe: and readinessProbe: in the containers object inside the flask deployment definition block (for more information, you can check the Kubernetes documentation on probes).
      • Set initialDelaySeconds to 2 seconds.
      • Here the livenessProbe check method should be httpGet with path /, and the readinessProbe check method should be httpGet with path /?msg=Hello (a sketch of these probe blocks is shown at the end of this exercise).
      • You can use kubectl describe pod <pod_name> to check the lifecycle events of the pod. The output should look as shown below.
  • Deliverable: Take the screen shot of the Events section from the command output.
  • Increase the initialDelaySeconds to 40 seconds. Answer the question:
    • What happens after increasing the initialDelaySeconds time limit, and why did liveness and readiness errors occur in the first scenario?
  • You can refer to a Flask example here: https://github.com/sebinxavi/kubernetes-readiness
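As a reference, here is a sketch of what the probe blocks could look like under the flask container (port 5000 matches the containerPort used earlier; adjust the paths and delays as instructed above):

        livenessProbe:
          httpGet:
            path: /
            port: 5000
          initialDelaySeconds: 2
        readinessProbe:
          httpGet:
            path: /?msg=Hello
            port: 5000
          initialDelaySeconds: 2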

Deliverables:

  1. Screenshots from tasks 12.1 - 2
  2. Screenshots from 12.2.1, 12.2.2
  3. Screenshot from 12.3.5
  4. Screenshots from 12.4-2
  5. Screenshot from 12.5
  6. Screenshot from 12.6
  7. All .yml files