Task: Rolling Updates And Rolling Back Deployments in Kubernetes
Date: 14 Jul 2024
There is a production deployment planned for next week. The Nautilus DevOps team wants to test the deployment update and rollback in the Dev environment first so that they can identify the risks in advance. Below you can find more details about the plan they want to execute.
2. Now upgrade the deployment to version httpd:2.4.43 using a rolling update.
3. Finally, once all pods are updated, undo the recent update and roll back to the previous/original version (a command sketch follows below).
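Since the solution transcript for this task is not shown here, below is a minimal command sketch. The deployment name httpd-deploy and container name httpd-container are assumptions (step 1 of the plan, which creates the deployment, is not shown above); substitute the names from your environment.
kubectl set image deployment/httpd-deploy httpd-container=httpd:2.4.43   # rolling update to 2.4.43
kubectl rollout status deployment/httpd-deploy                           # wait until all pods are updated
kubectl rollout undo deployment/httpd-deploy                             # roll back to the previous version
kubectl rollout status deployment/httpd-deploy                           # confirm the rollback completed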
- Create a namespace named as httpd-namespace-datacenter.
- Create a deployment named as httpd-deployment-datacenter under the newly created namespace. For the deployment use the httpd image with latest tag only and remember to mention the tag, i.e. httpd:latest, and make sure the replica count is 2.
- Create a service named as httpd-service-datacenter under the same namespace to expose the deployment; nodePort should be 30004 (a manifest sketch covering all three items follows below).
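A minimal manifest sketch for these three requirements. The selector label app: httpd-app and container name httpd-container are assumptions, since neither is specified above; only the resource names, image tag, replica count, and nodePort come from the requirements.
---
apiVersion: v1
kind: Namespace
metadata:
  name: httpd-namespace-datacenter
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment-datacenter
  namespace: httpd-namespace-datacenter
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpd-app            # label is an assumption; any consistent label works
  template:
    metadata:
      labels:
        app: httpd-app
    spec:
      containers:
        - name: httpd-container   # container name is an assumption
          image: httpd:latest
---
apiVersion: v1
kind: Service
metadata:
  name: httpd-service-datacenter
  namespace: httpd-namespace-datacenter
spec:
  type: NodePort
  selector:
    app: httpd-app
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30004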
- Create a Deployment named as ic-deploy-datacenter.
- Configure spec as replicas should be 1, labels app should be ic-datacenter, template's metadata labels app should be the same ic-datacenter.
- The initContainers should be named as ic-msg-datacenter, use image fedora, preferably with latest tag and use command '/bin/bash', '-c' and 'echo Init Done - Welcome to xFusionCorp Industries > /ic/ecommerce'. The volume mount should be named as ic-volume-datacenter and mount path should be /ic.
- Main container should be named as ic-main-datacenter, use image fedora, preferably with latest tag and use command '/bin/bash', '-c' and 'while true; do cat /ic/ecommerce; sleep 5; done'. The volume mount should be named as ic-volume-datacenter and mount path should be /ic.
- Volume to be named as ic-volume-datacenter and it should be an emptyDir type (a manifest sketch follows below).
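A sketch of this deployment; every name and command comes from the requirements above, so only the selector block is inferred.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ic-deploy-datacenter
  labels:
    app: ic-datacenter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ic-datacenter
  template:
    metadata:
      labels:
        app: ic-datacenter
    spec:
      initContainers:
        - name: ic-msg-datacenter
          image: fedora:latest
          command: ['/bin/bash', '-c', 'echo Init Done - Welcome to xFusionCorp Industries > /ic/ecommerce']
          volumeMounts:
            - name: ic-volume-datacenter
              mountPath: /ic
      containers:
        - name: ic-main-datacenter
          image: fedora:latest
          command: ['/bin/bash', '-c', 'while true; do cat /ic/ecommerce; sleep 5; done']
          volumeMounts:
            - name: ic-volume-datacenter
              mountPath: /ic
      volumes:
        - name: ic-volume-datacenter
          emptyDir: {}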
Date: 18 Apr 2024
Task: Persistent Volumes in Kubernetes
The Nautilus DevOps team is working on a Kubernetes template to deploy a web application on the cluster. There are some requirements to create/use persistent volumes to store the application code, and the template needs to be designed accordingly. Please find more details below:
- Create a PersistentVolume named as pv-nautilus. Configure the spec as storage class should be manual, set capacity to 3Gi, set access mode to ReadWriteOnce, volume type should be hostPath and set path to /mnt/finance (this directory is already created; you might not be able to access it directly, so you need not worry about it).
- Create a PersistentVolumeClaim named as pvc-nautilus. Configure the spec as storage class should be manual, request 3Gi of the storage, set access mode to ReadWriteOnce.
- Create a pod named as pod-nautilus, mount the persistent volume you created with claim name pvc-nautilus at the document root of the web server; the container within the pod should be named as container-nautilus using image httpd with latest tag only (remember to mention the tag, i.e. httpd:latest).
- Create a node port type service named web-nautilus using node port 30008 to expose the web server running within the pod.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.
Solution:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nautilus
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/finance
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nautilus
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nautilus
  labels:
    app: httpd
spec:
  volumes:
    - name: storage-nautilus
      persistentVolumeClaim:
        claimName: pvc-nautilus
  containers:
    - name: container-nautilus
      image: httpd:latest
      ports:
        - containerPort: 80
      volumeMounts:
        - name: storage-nautilus
          mountPath: /usr/local/apache2/htdocs
---
apiVersion: v1
kind: Service
metadata:
  name: web-nautilus
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008
thor@jumphost ~$
thor@jumphost ~$ kubectl apply -f pod-nautilus.yaml
persistentvolume/pv-nautilus created
persistentvolumeclaim/pvc-nautilus created
pod/pod-nautilus created
service/web-nautilus created
thor@jumphost ~$ kubectl get all
NAME               READY   STATUS    RESTARTS   AGE
pod/pod-nautilus   1/1     Running   0          18s

NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP        21m
service/web-nautilus   NodePort    10.96.60.136   <none>        80:30008/TCP   18s
thor@jumphost ~$
Date: 19 Apr 2024
Task: Manage Secrets in Kubernetes
The Nautilus DevOps team is working to deploy some tools in the Kubernetes cluster. Some of the tools are licence-based, so the licence information needs to be stored securely within the Kubernetes cluster. Therefore, the team wants to utilize Kubernetes secrets to store those secrets. Below you can find more details about the requirements:
- We already have a secret key file media.txt under the /opt location on the jump host. Create a generic secret named media; it should contain the password/license-number present in the media.txt file.
- Also create a pod named secret-nautilus.
- Configure the pod's spec as: container name should be secret-container-nautilus, image should be ubuntu, preferably with latest tag (remember to mention the tag with the image). Use a sleep command for the container so that it remains in a running state. Consume the created secret and mount it under /opt/cluster within the container.
- To verify, you can exec into the container secret-container-nautilus to check the secret key under the mounted path /opt/cluster. Before hitting the Check button, please make sure the pod/pods are in a running state; validation can take some time to complete, so be patient. (A solution sketch follows below.)
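The solution transcript for this task is not shown here, so what follows is a minimal sketch. With --from-file, kubectl uses the file name (media.txt) as the secret key, which satisfies the requirement; the volume name secret-volume is an assumption.
kubectl create secret generic media --from-file=/opt/media.txt
apiVersion: v1
kind: Pod
metadata:
  name: secret-nautilus
spec:
  containers:
    - name: secret-container-nautilus
      image: ubuntu:latest
      command: ['sleep', 'infinity']   # keeps the container running
      volumeMounts:
        - name: secret-volume          # volume name is an assumption
          mountPath: /opt/cluster
  volumes:
    - name: secret-volume
      secret:
        secretName: media
To verify: kubectl exec -it secret-nautilus -- cat /opt/cluster/media.txt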
Date: 21 Apr 2024
Task: Environment Variables in Kubernetes
Create a pod named envars.
Container name should be fieldref-container, use image redis preferably with latest tag, use command 'sh', '-c' and args should be
'while true; do
echo -en '\n';
printenv NODE_NAME POD_NAME;
printenv POD_IP POD_SERVICE_ACCOUNT;
sleep 10;
done;'
(Note: please take care of the indentation)
Define four environment variables as mentioned below:
a.) The first env should be named as NODE_NAME, set valueFrom fieldref and fieldPath should be spec.nodeName.
b.) The second env should be named as POD_NAME, set valueFrom fieldref and fieldPath should be metadata.name.
c.) The third env should be named as POD_IP, set valueFrom fieldref and fieldPath should be status.podIP.
d.) The fourth env should be named as POD_SERVICE_ACCOUNT, set valueFrom fieldref and fieldPath should be spec.serviceAccountName.
Set restart policy to Never.
To check the output, exec into the pod and use printenv command.
Note: The kubectl utility on jump_host has been configured to work with the kubernetes cluster.
Solution:
thor@jumphost ~$ cat envars.yaml
apiVersion: v1
kind: Pod
metadata:
  name: envars
spec:
  restartPolicy: Never
  containers:
    - name: fieldref-container
      image: redis:latest
      command: ['sh', '-c']
      args:
        - |
          while true; do
            echo -en '\n';
            printenv NODE_NAME POD_NAME;
            printenv POD_IP POD_SERVICE_ACCOUNT;
            sleep 10;
          done;
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
thor@jumphost ~$
thor@jumphost ~$ kubectl apply -f envars.yaml
thor@jumphost ~$ kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
envars   1/1     Running   0          35s
thor@jumphost ~$ kubectl -it exec envars -- bash
root@envars:/data# printenv NODE_NAME POD_NAME POD_IP POD_SERVICE_ACCOUNT
kodekloud-control-plane
envars
10.244.0.5
default
root@envars:/data#