WARNING: This file contains my condensed notes of things I needed to memorize for the test so it probably does not contain everything you need to learn.

Kubernetes Documentation

https://kubernetes.io/docs/home/

Test Tips

Resources Allowed During Exam

During the exam, candidates may:

Exam Technical Instructions

  1. Root privileges can be obtained by running 'sudo -i'.
  2. You must NOT reboot the base node (hostname node-1). Rebooting the base node will NOT restart your exam environment.
  3. Do not stop or tamper with the certerminal process as this will END YOUR EXAM SESSION. Do not block incoming ports 8080/tcp, 4505/tcp and 4506/tcp. This includes firewall rules that are found within the distribution’s default firewall configuration files as well as interactive firewall commands.
  4. Use Ctrl+Alt+W instead of Ctrl+W, as Ctrl+W is a keyboard shortcut that will close the current tab in Google Chrome.
  5. The Terminal (Terminal Emulator Application) is a Linux Terminal; to copy & paste within the Linux Terminal you need to use LINUX shortcuts: Copy = Ctrl+SHIFT+C (inside the terminal) Paste = Ctrl+SHIFT+V (inside the terminal) OR Use the Right Click Context Menu and select Copy or Paste
  6. For security reasons, the INSERT key is prohibited within the Remote Desktop. Type i to switch into insert mode so that you can start editing the file. Once you're done, press the Escape key (Esc) to get out of insert mode and back to command mode.
  7. Installation of services and applications included in this exam may require modification of system security policies to successfully complete.

Environment Setup and kubectl

In the docs search for kubectl cheat sheet and select kubectl Cheat Sheet

Everything in this section is in the Kubectl Cheat Sheet

Kubectl Editor

First, make sure nano is installed. If so:
export KUBE_EDITOR=nano

kubectl Autocompletion and k Alias

The CKA and CKAD instructions indicate that k and autocompletion are already installed. Leaving instructions here in case they are not.

This is the first thing in the cheat sheet. I wouldn't bother memorizing it; just know how to get there and what it looks like.

source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
alias k=kubectl
complete -o default -F __start_kubectl k

Aliases

export do="--dry-run=client -o yaml" # k create deploy nginx --image=nginx $do
export now="--force --grace-period 0" # k delete pod nginx $now

kubectl

Default Namespace

This is also in the kubectl cheat sheet docs but it’s best to memorize it

Set default namespace

k config set-context --current --namespace=<namespace-name>

Check current namespace

k config view | grep namespace

Executing Command on Running Pods

Execute a command in a running pod quickly without interactively executing into it

k exec mypod -- <command to run>
k exec mypod -- cat /log/app.log

Setting Image of a Running Resource

k set image <resource-type>/<resource> <container-name>=<image>
k set image deployment/nginx nginx=nginx:1.9.1

Jsonpath
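
jsonpath lets you pull specific fields out of kubectl output. A couple of examples in the style of the kubectl cheat sheet (the exact fields queried here are just illustrative):

```shell
# All pod names in the current namespace, space separated:
kubectl get pods -o jsonpath='{.items[*].metadata.name}'

# One line per node with its kubelet version, using range/end:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'
```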

crictl

The test environment does not have docker installed so you need to know about crictl

The docker cli and crictl are very similar, but there is a page in the k8s docs that has mappings if you get confused. Search for docker to crictl and click on Mapping from docker cli to crictl.

# An example of using crictl with ssh to grab output and save it to a text file.
ssh <target> 'crictl logs <id>' &> /opt/file.txt
# Note: &> redirects standard error and standard out to the file.
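
The redirection is worth a quick local sanity check (bash; `emit` is a made-up function standing in for the remote command):

```shell
# A stand-in command that writes to both streams:
emit() { echo "to stdout"; echo "to stderr" 1>&2; }

emit > stdout_only.txt 2>/dev/null   # plain > captures stdout only
emit &> both.txt                     # &> captures stdout AND stderr

wc -l < stdout_only.txt   # 1
wc -l < both.txt          # 2
```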

Learn tmux

tmux is a terminal utility for creating and managing terminal sessions, windows and panes.

This is a great 12 minute video on using tmux

Sessions

tmux sessions are incredibly powerful as they manage processes in the background for you. You can create a tmux session and hop out of it and it will remain running. More importantly, if your ssh session is killed, you can hop back on, reconnect to the tmux session, and not lose your work.

A tmux session has a window status bar at the bottom of the terminal that shows the list of current windows in that session. Window default names are numeric (0, 1, 2, etc.), these can be renamed, see Windows below.

Windows

Each tmux session can have many windows. The window names are at the bottom of the terminal with an * indicating which one is currently active. They are initially numbered but can be renamed (Ctrl+b ,).

Panes

Panes are ways of splitting up a current window into multiple terminal areas. Panes can be split vertically (Ctrl+b %) and horizontally (Ctrl+b ").

Scrolling and Copy Mode

For some reason unfathomable to me, tmux replaces the normal function of the up and down arrows (scrolling the terminal) with scrolling command history.

To scroll up and down in the terminal press Ctrl+b [ then use your normal navigation keys to scroll around (e.g. Up Arrow or PgDn). Press q to quit scroll mode.

Alternatively you can press Ctrl-b PgUp to go directly into copy mode and scroll one page up

kubelet

File locations

Service Control and Status

# Get kubelet status with 
systemctl status kubelet
service kubelet status

# Start 
systemctl start kubelet
service kubelet start

# Stop
systemctl stop kubelet
service kubelet stop

Troubleshooting kubelet Startup

# check if kubelet is running
ps -aux | grep kubelet
# Should return a kubelet command; it may also return lines that have kubelet as a parameter, don't confuse these with the command itself.
# Example:
# root        3985  0.0  0.0 4152604 96500 ?       Ssl  22:06   0:44 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2

# Check if it's configured
service kubelet status
# if it is configured you will see a "Drop in:" line that is its configuration file
# in this case that is /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Try to start it
service kubelet start
# Check it
service kubelet status
# In the "Process:" line we see it's trying to start /usr/local/bin/kubelet
# So we try to run the program from the command line
/usr/local/bin/kubelet
# Bash returns no file or directory, so this is the wrong executable path, let's find the real one
whereis kubelet
# or
which kubelet
# returns /usr/bin/kubelet
# In this case, it looks like the kubelet config is wrong, so we fix it in its config file, which we learned is at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf from the `service kubelet status` "Drop in:" line.

Expose Pod Information to Containers

In the kubernetes docs, search for expose pod information and click on Expose Pod Information to Containers Through Environment…

This is useful for when you need to get information about your pod or its environment and pass it into that pod.

apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
          printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
          sleep 10;
        done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never

Etcd Backup

In the docs search for etcdctl backup and click on Operating etcd clusters for Kubernetes. In the right hand navigation look for Backing up an etcd cluster –> Snapshot using etcdctl options.

Look for this command:

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
  snapshot save <backup-file-location>

Typically the certs can be found in /etc/kubernetes/pki/etcd; if they are not, then look at the etcd pod spec in /etc/kubernetes/manifests or by describing the pod.
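
The same docs page also covers restoring a snapshot; a sketch of the restore flow (paths here are placeholders):

```shell
# Restore the snapshot into a new data directory:
ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-restore snapshot restore /opt/snapshot.db
# Then edit the etcd static pod manifest (/etc/kubernetes/manifests/etcd.yaml)
# so its hostPath volume points at the new data directory.
```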

Services

Although you can use k create service it is better to use:
k expose <resource> <resource-name> --type=NodePort --port=80 --name=ingress --dry-run=client -o yaml > service.yaml
as this will create the right selector for you.

Once created, always check the service with:
k describe svc <svc-name>
and check that the Endpoints: line is populated with the IP of the target pod; if so, the service has bound to the resource.

Ports

--port is optional and will be copied from the resource being exposed if it is stated there; otherwise you will need to state it.
--target-port is optional and is only needed if it is different from --port

NodePort

To expose as a nodePort use k expose <thing> --type=NodePort, but you cannot give the nodePort value with k expose; it will assign one randomly. If you need to state a specific port then pipe the YAML to a file and edit it, adding nodePort: <port> in the ports: section.
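
For example, the edited ports section of the dumped YAML might look like this (names, labels and the port number are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # must be in the node port range, 30000-32767 by default
```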

Volumes

In the docs search for volumes and click on Volumes
Use the right side navigation to find the different volume types, such as hostPath, emptyDir, etc

Persistent Volumes

Persistent Volumes specs can be found in the docs in the Volumes page and in the Persistent Volumes page.

Volume types are listed in the Persistent Volumes docs but they link to the Volumes docs. You need to know how to take the volume definition from a pod and insert it into a PV.

Example: hostPath. The pod spec for a hostPath volume looks like this.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This bit goes into the PV
    hostPath:
      path: /data
      type: Directory # optional

And a PersistentVolume looks like this.
See how we take the volumes –> hostPath section and add it on to the PersistentVolume –> spec section.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem # optional
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle # optional
  hostPath:
    path: '/path/to/dir'

Persistent Volume Claim

Search for pvc in the docs. The PVC docs are in the same Persistent Volume page as PVs are.
Use the right side sub navigation to look for PersistentVolumeClaims for the spec.

To add a PVC to a pod, look for Claims As Volumes in the sub navigation just below PersistentVolumeClaims
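
For reference, a minimal PVC and its use in a pod look roughly like this (names and sizes are placeholders; check the docs page for the authoritative spec):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# In the pod spec, reference the claim under volumes:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: my-vol
  volumes:
  - name: my-vol
    persistentVolumeClaim:
      claimName: my-pvc
```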

Security Contexts

Search for security context in the docs and click on Configure a Security Context for a Pod or Container

A security context is used to set several security related capabilities including:

These can be set at the pod or container level and container level settings override pod level. In the example below the container would have runAsUser set to 2000

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      runAsUser: 2000
      allowPrivilegeEscalation: false
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]

Adding a User, Certificate Signing Requests, Roles

To add a user we create a Certificate Signing Request using the user's certificate.
Remember that the user's name is encoded in their certificate; we don't provide it to Kubernetes any other way.

In the docs search for csr and select “Certificate Signing Requests”
In the right hand sub navigation click on Normal user
Follow the instructions there, here’s the outline:

  1. If not given, create a private key and certificate, otherwise use the cert given
  2. Create the CertificateSigningRequest
  3. Approve certificate signing request (user is now “created”)
  4. Get the certificate (if instructed and you need to auth with user via kubectl, otherwise skip)
  5. Create Role and RoleBinding (if instructed)
    1. Get and describe to make sure they created correctly and are in the correct name space
  6. Add to kubeconfig (if instructed)

These are all in the “Normal user” section mentioned above in the order shown here
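
For step 1, when you have to generate the key and CSR yourself, the openssl commands look like this (the username john is a placeholder; the cluster-side steps are left as comments since they need a live cluster):

```shell
# 1. Create a private key and a certificate signing request for user "john".
#    The CN in the subject becomes the Kubernetes username.
openssl genrsa -out john.key 2048
openssl req -new -key john.key -out john.csr -subj "/CN=john"

# The CertificateSigningRequest manifest wants the CSR base64-encoded on one line:
cat john.csr | base64 | tr -d "\n"

# 2-4. Then, against the cluster (not runnable here):
# kubectl apply -f csr.yaml
# kubectl certificate approve john
# kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
```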

Roles and Role Bindings

Instructions are also in the Certificate Signing Requests page, look for “Create Role and RoleBinding” which is step 5 from above.

Important: Remember to read the questions carefully and make sure you are creating the correct resource type: Roles vs ClusterRoles and RoleBindings vs ClusterRoleBindings.

If instructed to create role and rolebinding check if the user can perform the actions required:

# Normal User
k auth can-i <verb> <resource> --as <user> [-n <namespace>]
k auth can-i create pods --as john -n development

# Service Accounts
### Memorize this!
# The --as=system:serviceaccount:... format is in the docs in the Authorization Overview page
k auth can-i <verb> <resource> --as=system:serviceaccount:<namespace>:<serviceaccountname> [-n <namespace>]
k auth can-i get pods --as=system:serviceaccount:development:my-dev-sa -n development

DNS

Pod and Service DNS can be found in the docs by searching for dns and selecting DNS for Services and Pods

Notes:

# Service FQDN is typically
<svc>.<namespace>.svc.cluster.local
my-service.default.svc.cluster.local

# Pod FQDN is typically
<pod-ip-dashed>.<namespace>.pod.cluster.local
10-5-2-7.default.pod.cluster.local
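
The dashed pod name is just the pod IP with dots swapped for dashes; a tiny shell sketch (IP and namespace are made up):

```shell
POD_IP="10.5.2.7"
NAMESPACE="default"
# tr swaps every dot in the IP for a dash, then we append the DNS suffix:
echo "$(echo "$POD_IP" | tr '.' '-').$NAMESPACE.pod.cluster.local"
# 10-5-2-7.default.pod.cluster.local
```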

Networking

Network Policy

Search for network policy in the Kubernetes docs and select Network Policies

This is the example from the docs but I’ve added some notes below

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # The from field can be as complex or simple as you need it to be.
    # Example: if you want to allow traffic from anywhere on a given port,
    # remove the from section and just keep the ports section.
    # These are restrictions, so remove what is not needed; do not try to add
    # wildcards (to cidr, for example).
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
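
As a worked example of the "remove what is not needed" note above: a policy that only allows ingress to role=db pods from role=frontend pods on one port might be trimmed down to this (labels and the port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 80
```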

Troubleshooting Commands

# nslookup
k run busybox --image=busybox -- sleep 5000
k exec busybox -- nslookup <endpoint>
# or, this will run the container, let you inside, then remove the container when you exit
k run busybox --image=busybox --rm -it -- sh
# Inside the running container (busybox includes nslookup and wget, but not nmap)
> nslookup <endpoint>

# curl
k run curl --image=alpine/curl -- sleep 5000
k exec curl -- curl http://<endpoint>

# Logs for a service
journalctl -u <service-name>

# To see what ports are open and how many connections they have
netstat -plnt
# or, if netstat is not installed
ss -plnt

# List all interfaces
ip link
# Get IP
ip a
# find default gateway
ip route

Processes and Services

# Find what options a service is running with
ps -aux | grep kubelet | grep network

Running Commands in a Pod

command: ['echo', 'foo']
# vs  
command: ['sh', '-c', 'echo foo']
# Note that when using a subshell with `sh` the main thing you are running should be one string and not broken into array elements

Most of the time you should be able to just state the commands and not call a subshell; however, if you need access to shell variables or shell-specific functionality then you will need to run a subshell with sh.
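
The same rule is easy to demo with plain sh outside Kubernetes (nothing Kubernetes-specific below):

```shell
# With sh -c, the single string is parsed by the shell, so variables work:
sh -c 'FOO=bar; echo "value is $FOO"'
# value is bar

# If the script is broken into separate words, sh -c only treats the first
# one as the script; the rest become positional parameters ($0, $1, ...):
sh -c 'echo' 'hello'
# (prints an empty line -- "hello" became $0, not an argument to echo)
```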

yq

yq is the YAML equivalent of jq and is used to parse and search YAML files. Its syntax is quite a bit different from jq.

See this for more

# pretty print a file
yq eval some.yaml

# basic indexing
yq eval ".user.addresses" user.yaml # Shows the array in array form
yq eval ".user.addresses[]" user.yaml # Flattens the array to KVPs (splat)
yq eval ".user.addresses[1]" user.yaml # Gets the second item in the array

# Select items with a * match
# Given:
# user:
#   orders:
#   - 4356436
#   - 4345753
#   - 2345234
yq eval '.user.orders[] | select(. == "43*")' user.yaml
# 4356436
# 4345753

# Sorting keys
yq eval 'sortKeys(.user)' user.yaml

Paths

Item                    Path
PKI Certs               /etc/kubernetes/pki
Static Pods             /etc/kubernetes/manifests
CNI Bin                 /opt/cni/bin/
CNI config              /etc/cni/net.d/
Kubelet to kube-api     /etc/kubernetes/kubelet.conf
Kubelet config          /var/lib/kubelet/config.yaml
Kubelet Certs           /var/lib/kubelet/pki/
kubelet service start   /etc/systemd/system/kubelet.service.d/10-kubeadm.conf *

* The kubelet service startup command can be found by running service kubelet status and looking at the “Drop-In:” line.