Kubernetes - Running an Application Using a Deployment
Let’s learn how to deploy, scale, and roll back an application in Kubernetes.
Pod
A Pod is the basic execution unit of a Kubernetes application. A Pod encapsulates an application’s container, storage resources, a unique network IP, and options that govern how the containers should run.
You don’t usually create individual Pods directly. Instead, you use a higher-level abstraction, called a Controller, to create Pods.
Common Controllers are:
- Deployment
- Job
- StatefulSet
- DaemonSet
Controllers use Pod Templates to create Pods. A Pod Template is a pod specification included in other objects, such as Deployments, Jobs, and DaemonSets.
Deployment
A Deployment declares what the desired state of the application should look like. It creates a ReplicaSet, which in turn creates and manages the Pods.
Deployments are best suited for running stateless applications.
Deployment File
Sample YAML file myapp-deployment.yaml:
```yaml
apiVersion: apps/v1
```
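A complete myapp-deployment.yaml consistent with the fields described in this section (a Deployment named myapp whose Pods carry the label app: myapp) might look like the following sketch; the replica count, nginx image, and container port are illustrative assumptions, not part of the original:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2              # assumed; set to the desired number of Pods
  selector:
    matchLabels:
      app: myapp           # must match the Pod Template's labels below
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx       # assumed image; a later section rolls it to nginx:1.16
        ports:
        - containerPort: 80
```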
All Kubernetes YAML files have four required fields:
- apiVersion - version of Kubernetes API you’re using to create this object
- kind - the kind of object to create
- metadata - data that identifies the object, such as name, UID, labels, and an optional namespace
- spec - contains nested fields specific to that object.
You can use the Kubernetes API to read and write Kubernetes resource objects. The definition of the Deployment object is in the Kubernetes API reference.
The above YAML file creates a Deployment named myapp.
A Deployment specifies the number of replicas and the Pod Template used to create Pods.
The Deployment’s selector selects Pods based on labels; the selected Pods are managed by this Deployment. The selector must match the Pod Template’s labels.
The Pods created will have the label app: myapp. A Pod’s labels are important: both Deployments and Services select Pods based on their labels. A Pod’s containers section defines the containers to run.
The Kubernetes API defines the specification for every resource, including the Deployment.
Run a Deployment Using YAML File
Use the kubectl apply -f <resource-name>.yaml command to create the Deployment. kubectl apply can be used to create any Kubernetes resource. Let’s create myapp:
```shell
$ kubectl apply -f myapp-deployment.yaml
```
You can also use kubectl apply to update the Deployment.
To check the deployment
```shell
$ kubectl get deployments
```
For more information on the deployment, use the kubectl describe deployment <deployment-name> command.
To check the pods
```shell
$ kubectl get pods
```
Use kubectl get pods -o wide to get more details, including each Pod’s internal IP address and the node it runs on. The -o option specifies the output format; common formats are json, yaml, and wide.
```shell
$ kubectl get pods -o wide
```
To delete the deployment
```shell
$ kubectl delete deployment myapp
```
Resource Limit
When you specify a Pod, you can optionally specify how much CPU and memory (RAM) each container needs. It is good practice to set resource limits for containers; this prevents a container from consuming too many of the cluster’s resources.
Fractional CPU is allowed: 100m means “100 millicpu”, and 1000m means 1 CPU.
Request a container with an initial 128Mi of memory and 0.25 CPU, with a limit of 1Gi of memory and 1 CPU:

```yaml
containers:
- name: myapp
  image: nginx          # image as used elsewhere in this article
  resources:
    requests:
      memory: "128Mi"
      cpu: "250m"       # 0.25 CPU
    limits:
      memory: "1Gi"
      cpu: "1000m"      # 1 CPU
```
Logs
If the Pod only has one container, use the kubectl logs <pod-name> command to get the container’s log.
```shell
$ kubectl logs myapp-6d8996bd9f
```
If there are multiple containers in the Pod, specify the container name too: kubectl logs <pod-name> -c <container-name>.
You can see more log command examples using kubectl logs -h. The kubectl cheat sheet also provides practical examples for the kubectl logs command.
Scale Up and Scale Down
Use the kubectl scale command to scale a deployment; kubectl takes the name of the deployment as an argument.
```shell
$ kubectl scale --replicas=4 deployment.apps/myapp
```
Another way is to set the replicas field in the YAML file and then use kubectl apply to update the resource.
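For example, the declarative equivalent of the imperative scale command above is a one-line change in the manifest (the count of 4 matches that example), followed by re-applying the file:

```yaml
spec:
  replicas: 4   # desired Pod count; kubectl apply -f myapp-deployment.yaml picks this up
```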
Kubectl exec
The kubectl exec command works like docker exec: you use it to execute a command inside a container. The most common use case is to open an interactive shell by allocating a pseudo-TTY.
Try not to use the kubectl exec command in production environments; commands run in Pods may disrupt the service.
```shell
$ kubectl exec -it myapp-6d8996bd9f-hzd55 -- /bin/bash
```
Execute a command to read the container’s memory limit:

```shell
$ kubectl exec -it myapp-6d8996bd9f-hzd55 -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
```
Roll Back
To demonstrate rollback, first update the container to use a different version of the nginx image:
```shell
$ kubectl set image deployment.extensions/myapp myapp=nginx:1.16 --record
```
Here the --record option records the current kubectl command in the resource’s annotations.
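Concretely, --record stores the command in the kubernetes.io/change-cause annotation on the Deployment; the annotation can also be set by hand. A sketch of what it would contain after the image update above:

```yaml
metadata:
  annotations:
    kubernetes.io/change-cause: "kubectl set image deployment.extensions/myapp myapp=nginx:1.16 --record=true"
```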
Use kubectl rollout history to see an overview of past revisions.
```shell
$ kubectl rollout history deployment.extensions/myapp
```
We can see that after updating the container’s image, we have a new revision.
To check a specific revision, add the --revision option to the command:
```shell
$ kubectl rollout history deployment/myapp --revision=1
```
Use kubectl rollout undo to roll back the resource to revision 1. --to-revision sets the revision to roll back to; it defaults to 0 (the previous revision).
```shell
$ kubectl rollout undo deployment/myapp --to-revision=1 --record
```