kubectl is the primary tool used for interacting with Kubernetes.
To access the Kubernetes cluster you need a config file.
Get the config file and save it as $HOME/.kube/config. After that you should be able to access the cluster with:
$ kubectl get nodes
kubectl commands can generally be applied to all Kubernetes resource types.
Set up tab completion (very useful!):
$ source <(kubectl completion bash)  # for bash
$ source <(kubectl completion zsh)   # for zsh
$ kubectl get <RESOURCE TYPE>
$ kubectl get pods
$ kubectl get services
$ etc ...
$ kubectl describe <RESOURCE TYPE> <RESOURCE NAME>
$ kubectl describe pod my-pod
Apply YAML configuration to the cluster
$ kubectl apply -f configuration-file.yaml
Note: Apply is an idempotent action, so it can be run multiple times and will only update resources when the config has changed. It is used to create and update (almost) all resources.
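If your kubectl version supports it, you can also preview what apply would change against the live cluster state before running it (the file name below is just the example from above):
$ kubectl diff -f configuration-file.yaml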
$ kubectl delete <RESOURCE TYPE> <RESOURCE NAME>
$ kubectl delete pod my-pod
Show nodes in Kubernetes cluster
$ kubectl get nodes
Show node resource usage
$ kubectl top nodes
Show pod resource usage
$ kubectl top pods
A namespace separates resources. Identically named resources can exist in different namespaces without issues.
List the available namespaces:
$ kubectl get ns
To separate your resources from everyone else's, create a new namespace and set it as the default.
$ kubectl create ns <your-namespace>
$ kubectl config set-context --current --namespace=<your-namespace>
Create a directory called, for example, exercise-1 and create all files in this section in that directory.
You will deploy a simple echo server that responds to HTTP requests. Create the following configuration file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: echo-server-deployment
spec: (1)
  replicas: 1
  selector: (2)
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels: (3)
        app: echo-server
    spec: (4)
      containers:
        - name: server
          image: registry.gitlab.com/r0bobo/kubernetes-workshop/echo-server-arm
          ports:
            - containerPort: 80
              name: http-server
(1) Spec of the deployment itself
(2) Metadata labels used to select which pod(s) should be part of the deployment
(3) Metadata labels added to the pod(s)
(4) Spec of the pod(s) that the deployment creates
Note: Internally Kubernetes uses labels, not names, to select pods.
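Once the deployment below has been applied, you can see label selection in action by filtering pods on the app label with the standard -l flag:
$ kubectl get pods -l app=echo-server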
Apply this configuration by running:
$ kubectl apply -f deployment.yaml
To see if the deployment succeeded run:
$ kubectl get deployments
And to see the actual pods that the deployment created:
$ kubectl get pods
A deployment is a controller that makes sure that the specified pod(s) are always up.
Delete the created pod with:
$ kubectl delete pod <POD NAME>
And then see how it is recreated by the deployment.
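To follow the recreation live you can, for example, watch the pod list in one terminal while deleting the pod in another (--watch is a standard kubectl flag):
$ kubectl get pods --watch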
Kubernetes does not expose services outside the cluster by default. Services that should be reachable from outside have to be exposed explicitly. This can be done in different ways, but the simplest is to create a LoadBalancer service.
Note: LoadBalancer services require that the Kubernetes cluster has a load balancer controller installed. MetalLB is installed and used in this cluster.
A LoadBalancer service exposes the service on an externally reachable IP address. To create one, create this resource:
kind: Service
apiVersion: v1
metadata:
  name: echo-server-lb
spec:
  selector:
    app: echo-server (1)
  ports:
    - protocol: TCP
      port: 80 (2)
      targetPort: http-server (3)
  type: LoadBalancer
(1) Must match the label of the pod(s) that should be exposed (in this case app: echo-server)
(2) Port that will be exposed externally
(3) Pod port that will be mapped to the external port (can be the port name from the pod spec or a port number)
Apply the configuration and get the external address by running:
$ kubectl get services
EXTERNAL-IP shows the external IP of the service.
Try connecting to the echo server. For example:
$ curl <EXTERNAL-IP>
Hello from echo-server-74c56c894c-fpfxt!
The service should also be reachable on the same IP in a browser.
A deployment can run multiple instances of the same pod.
Change the number of replicas in the deployment spec in deployment.yaml and apply the config.
List the available pods and see how the number increases.
$ kubectl get pods
The HTTP traffic is now load balanced between the pods. Connect to the server again in a browser or on the command line and see how different pods respond.
$ curl <EXTERNAL-IP>
Hello from echo-server-57df7f7569-99md4!
$ curl <EXTERNAL-IP>
Hello from echo-server-57df7f7569-g6gsj!
$ curl <EXTERNAL-IP>
Hello from echo-server-57df7f7569-td4fw!
You can also see the access logs on the server side by showing the container logs of the echo-server pods.
$ kubectl logs <POD NAME>
[2019-05-27T19:24:19Z INFO actix_web::middleware::logger] 10.42.2.1:36522 "GET /health HTTP/1.1" 200 2 "-" "kube-probe/1.14" 0.000169
[2019-05-27T19:24:22Z INFO actix_web::middleware::logger] 10.42.2.1:36530 "GET /health HTTP/1.1" 200 2 "-" "kube-probe/1.14" 0.000145
[2019-05-27T19:24:25Z INFO actix_web::middleware::logger] 10.42.2.1:36538 "GET /health HTTP/1.1" 200 2 "-" "kube-probe/1.14" 0.000163
[2019-05-27T19:24:28Z INFO actix_web::middleware::logger] 10.42.2.1:36548 "GET /health HTTP/1.1" 200 2 "-" "kube-probe/1.14" 0.000194
To show on which node each pod is running:
$ kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE
echo-server-57df7f7569-8m99q   1/1     Running   0          8m20s   10.42.3.203   pi-worker3
echo-server-57df7f7569-ckg6c   1/1     Running   0          8m20s   10.42.4.216   pi-worker1
echo-server-57df7f7569-tvq4h   1/1     Running   0          8m20s   10.42.2.205   pi-worker2
When there is more than one replica in the deployment, image upgrades are rolled out so that one (or more) pods are always available during the update.
Change the echo-server container image in deployment.yaml, apply the config, and watch how new pods are deployed with kubectl get pods.
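As an alternative workflow sketch, the image can also be changed directly on the deployment and the rollout followed from the command line (the deployment and container names below are the ones used in deployment.yaml; the new image reference is a placeholder):
$ kubectl set image deployment/echo-server-deployment server=<NEW IMAGE>
$ kubectl rollout status deployment/echo-server-deployment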
An ingress is a more convenient and versatile way to expose pods than a LoadBalancer service. It is basically a reverse proxy that routes traffic based on the requested web address. A drawback of an ingress, however, is that it can only handle HTTP traffic.
Remove the loadbalancer from the cluster (created in Expose using loadbalancer) and create the following service and ingress definition:
kind: Service
apiVersion: v1
metadata:
  name: echo-server-service
spec:
  selector:
    app: echo-server (1)
  ports:
    - protocol: TCP
      port: 80
      targetPort: http-server
  type: ClusterIP (2)
---
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: echo-server-ingress
spec:
  rules:
    - host: <YOUR HOST>.layer10.k8s (3)
      http:
        paths:
          - backend:
              serviceName: echo-server-service (4)
              servicePort: 80
(1) Needs to match the pod app label, just like in Expose using loadbalancer
(2) The type ClusterIP only exposes the service within the Kubernetes cluster and does not allocate an external IP
(3) The web address that should be routed to the pod. Can be any number of subdomains but must end with .layer10.k8s
(4) The service to route ingress traffic to
Note: Make sure your hostname is unique and doesn't clash with someone else's.
An ingress routes traffic to services, which is why both a service (that routes traffic to the pods) and an ingress need to be defined.
Note: Ingresses, just like LoadBalancer services, require an ingress controller installed in the cluster. In this case the Traefik ingress controller is used.
Connect to the service with the configured url on the command line or in a browser:
$ curl <YOUR HOST>.layer10.k8s
Hello from echo-server-74c56c894c-z5trh!
A deployment can define health checks for an application to make sure that it is restarted whenever the health check fails. This is convenient when applications are not able to restart themselves within the container when they fail.
Update deployment.yaml to the following:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: echo-server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: server
          image: registry.gitlab.com/r0bobo/kubernetes-workshop/echo-server-arm
          ports:
            - containerPort: 80
              name: http-server
          livenessProbe:
            httpGet:
              path: /health
              port: http-server
            initialDelaySeconds: 3
            periodSeconds: 3
Health checks can be either a command run inside the container or a network endpoint.
This check is configured to poll <POD IP>:80/health every 3 seconds, and allows the pod 3 seconds after it starts before the health checks begin. Whenever the endpoint returns an erroneous status code the health check is considered failed.
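For the command-based variant mentioned above, a liveness probe can instead run a command inside the container and fail if it exits non-zero. A minimal sketch (the file path is an illustrative assumption, not something the echo-server provides):
livenessProbe:
  exec:
    command:
      - cat
      - /tmp/healthy   # hypothetical file; the probe fails if this command exits non-zero
  initialDelaySeconds: 3
  periodSeconds: 3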
At the moment the health check returns HTTP 200 on the /health endpoint.
The echo-server can however be set to fail the health checks by clicking the fail button in the browser or with an HTTP POST request to the /fail endpoint.
Click the fail button or do a POST request to the fail endpoint:
$ curl -X POST <YOUR HOST>.layer10.k8s/fail
Server [echo-server-74c56c894c-rvxmh] health set to failed
The container is restarted when the next health check fails.
$ kubectl get pods
RESTARTS should increase for the pod that was failed.
A pod can also be configured with a readiness probe. Readiness probes stop traffic from being routed to a pod until the probe returns an OK status. This is convenient for applications that have a long startup time.
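A minimal readiness probe sketch for the echo-server, assuming the same /health endpoint is suitable for readiness (the Minio deployment in the next exercise uses a dedicated readiness endpoint):
readinessProbe:
  httpGet:
    path: /health
    port: http-server
  periodSeconds: 3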
When you are finished, remove all resources created in the exercise directory:
$ kubectl delete -f exercise-1
The echo-server only keeps its state as long as its pod is running. Whenever a pod is destroyed, its filesystem is destroyed as well.
To demonstrate how to persist state in Kubernetes we will deploy Minio, a self-hostable S3-compatible object storage server.
Again, create the manifests for this exercise in their own directory for easy cleanup when you are finished.
Create the following deployment, service and ingress:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: registry.gitlab.com/r0bobo/kubernetes-workshop/minio-arm
          env:
            - name: MINIO_ACCESS_KEY
              value: <LOGIN USERNAME> (1)
            - name: MINIO_SECRET_KEY
              value: <LOGIN PASSWORD> (2)
          ports:
            - containerPort: 9000
              name: http-server
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: http-server
            initialDelaySeconds: 3
            periodSeconds: 3
          readinessProbe: (3)
            httpGet:
              path: /minio/health/ready
              port: http-server
            periodSeconds: 3
(1) Choose a Minio login name to set up
(2) Choose a Minio login password to set up
(3) Minio has a readiness endpoint
kind: Service
apiVersion: v1
metadata:
  name: minio-service
spec:
  selector:
    app: minio
  ports:
    - protocol: TCP
      port: 80
      targetPort: http-server
---
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: minio-ingress
spec:
  rules:
    - host: <YOUR SUBDOMAIN>.layer10.k8s
      http:
        paths:
          - backend:
              serviceName: minio-service
              servicePort: 80
Apply it to the cluster and wait for the pod to start.
When the pod has started (check status with kubectl get pods), go to the ingress URL in a browser and log in with the configured access key and secret key.
Create a bucket and add a file. If you want to get fancy you can also use any S3-API-compatible tool to upload the file, for example as sketched below.
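As an optional sketch, a recent version of the Minio client (mc) is one such S3-compatible tool; the alias, bucket and file names below are arbitrary examples and the credentials are the ones you configured in the deployment:
$ mc alias set workshop http://<YOUR SUBDOMAIN>.layer10.k8s <LOGIN USERNAME> <LOGIN PASSWORD>
$ mc mb workshop/test-bucket
$ mc cp ./some-file.txt workshop/test-bucket/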
Delete the pod (to simulate the pod being unscheduled or failing for some reason):
$ kubectl get pods -l app=minio  # To find pod name
$ kubectl delete pod <MINIO POD NAME>
The pod will be recreated immediately by the deployment, but when you log in to Minio again the file that was uploaded will be gone.
To persist the Minio state a persistent volume has to be configured and mounted. The most convenient way to do this is by using a PVC (Persistent Volume Claim).
A PVC provisions storage from a storage class that is configured in the cluster. To see the available storage classes run:
$ kubectl get storageclasses
NAME              PROVISIONER      AGE
block (default)   iscsi-targetd    33h
shared            fuseim.pri/ifs   17d
The cluster has two storage classes configured.
block is iSCSI-backed storage that is safe to use for applications such as databases but only usable by a single pod at a time.
shared is NFS-backed and can be mounted to multiple concurrent pods, but is unsafe to use for databases.
Storage classes are configured by the administrator or the cloud provider. At a cloud provider there are probably many different classes for different prices, speeds, levels of backup, etc.
To create a PVC and mount it to the Minio pod, update the deployment manifest as follows:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: minio-deployment
spec:
  strategy:
    type: Recreate (1)
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      volumes: (2)
        - name: minio-data
          persistentVolumeClaim:
            claimName: minio-pvc
      containers:
        - name: minio
          image: registry.gitlab.com/r0bobo/kubernetes-workshop/minio-arm
          env:
            - name: MINIO_ACCESS_KEY
              value: <LOGIN USERNAME>
            - name: MINIO_SECRET_KEY
              value: <LOGIN PASSWORD>
          ports:
            - containerPort: 9000
              name: http-server
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: http-server
            initialDelaySeconds: 3
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /minio/health/ready
              port: http-server
            periodSeconds: 3
          volumeMounts: (3)
            - mountPath: /data
              name: minio-data
              subPath: data
            - mountPath: /root/.minio
              name: minio-data
              subPath: config
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-pvc
spec:
  accessModes:
    - ReadWriteOnce (4)
  volumeMode: Filesystem (5)
  resources:
    requests:
      storage: 1.1Gi
(1) Modify the pod update rollout so two pods are never running simultaneously, to avoid issues with the PVC (block) that can only be mounted to one pod at a time
(2) Define volumes available to containers in the pod
(3) Mount the volume into the container
(4) Define the PVC as readable and writeable by a single pod
(5) Request the PVC as a formatted filesystem (XFS in this case) rather than a raw block device
After applying the manifest to Kubernetes the PVC status can be checked with:
$ kubectl get pvc
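If the PVC stays in Pending, you can for example inspect the provisioning and binding events with describe (standard kubectl, using the PVC name from the manifest above):
$ kubectl describe pvc minio-pvc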
When the pod has started, re-upload a file to Minio and delete the pod again. This time the file should still be there after the pod has been recreated.
Note: PVCs are one of the few objects that are not fully idempotent. Updating a PVC spec often requires deleting and recreating the PVC.
In the Minio deployment the MINIO_ACCESS_KEY and MINIO_SECRET_KEY values are defined directly in the manifest file. MINIO_SECRET_KEY is especially problematic as the manifests should probably be version controlled and checked into git.
To avoid this Kubernetes secrets can be used instead. Create one with this command:
$ kubectl create secret generic minio-secrets \
    --from-literal="access_key=<LOGIN USERNAME>" \
    --from-literal="secret_key=<LOGIN PASSWORD>"
The information can also be defined from a file.
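A minimal sketch of that file-based variant, assuming you have put the username and password in two local files (the file names are arbitrary examples):
$ kubectl create secret generic minio-secrets \
    --from-file=access_key=./access_key.txt \
    --from-file=secret_key=./secret_key.txt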
Secrets can be listed and described just like any resource:
$ kubectl get secrets
$ kubectl describe secrets minio-secrets
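Note that describe hides the values; the raw data is only base64-encoded, so anyone with read access to the secret can decode it (jsonpath and base64 are standard tooling, and the key name matches the secret created above):
$ kubectl get secret minio-secrets -o jsonpath='{.data.access_key}' | base64 --decode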
To use this secret in the Minio deployment, update the container spec in app.yaml:
containers:
  - name: minio
    image: registry.gitlab.com/r0bobo/kubernetes-workshop/minio-arm
    env:
      - name: MINIO_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: minio-secrets
            key: access_key
      - name: MINIO_SECRET_KEY
        valueFrom:
          secretKeyRef:
            name: minio-secrets
            key: secret_key
Now app.yaml can safely be shared without exposing the secrets.
A secret can be mounted to any number of pods so it is a convenient way to define and modify secrets needed in multiple pods from a single place.
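Besides environment variables, a secret can also be mounted into a pod as files. A minimal sketch (the mount path is an arbitrary example); each key in the secret becomes a file in the mounted directory:
spec:
  volumes:
    - name: minio-credentials
      secret:
        secretName: minio-secrets
  containers:
    - name: minio
      # each key becomes a file, e.g. /etc/minio-credentials/access_key
      volumeMounts:
        - name: minio-credentials
          mountPath: /etc/minio-credentials
          readOnly: true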
When you are finished, remove all resources created in the exercise directory:
$ kubectl delete -f <EXERCISE 2 DIR>
With the concepts used so far you should be able to deploy a more complex application. The application in question is Gitea, a lightweight Gitlab/Github/etc alternative.
This is the required application configuration:
Gitea:
- Container image
- Data location
- SSH traffic (not routable by an ingress; see the sketch after this list)
- Postgres host (a service is convenient here)
- Name of the database that Gitea creates
- URL that Gitea should allow
- URL that Gitea should display and redirect to

Postgres:
- Container image
- Data location
- Database client access
- Name of the database that will be created when Postgres starts
- Database user to create in the database
- Database password to create in the database
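Since SSH is not HTTP it cannot go through the ingress; one way to expose it is a LoadBalancer service on port 22, like in the first exercise. A minimal sketch, assuming the Gitea pods carry an app: gitea label and the container names its SSH port ssh (both names are assumptions, not given here):
kind: Service
apiVersion: v1
metadata:
  name: gitea-ssh-lb
spec:
  selector:
    app: gitea          # assumed pod label
  ports:
    - protocol: TCP
      port: 22          # externally exposed SSH port
      targetPort: ssh   # assumed container port name
  type: LoadBalancer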
Solution can be found here.