Using NFS storage for dynamic provisioning on Kubernetes

This replaces Step 9 in this post and uses NFS instead of rook-ceph. If you have a Synology, you can use NFS. See this link to turn on NFS on the Synology so that your Kubernetes cluster can use it. Your NFS permissions should include "Allow connections from non-privileged ports" and "Allow users to access mounted subfolders".

Step: 9) Setup NFS for storage

At this point, the cluster works but can only run stateless pods; the moment a pod is terminated, all of its data is gone. To run stateful pods, for example StatefulSets, you need persistent storage. We'll use NFS to provide that. This assumes you already have an NFS server set up, for example a NetApp, a Synology, or another NFS server. First, install the NFS client on every node.
:~$ sudo apt install nfs-common
On the master node only, download the deployment file for the nfs-client provisioner
draconpern@k8s-master:~$ wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/deployment.yaml
draconpern@k8s-master:~$ nano deployment.yaml
Change the server and path to your NFS mount: replace 10.10.10.60 with your server and '/ifs/kubernetes' with the NFS mount point on your server. For example:
            - name: NFS_SERVER
              value: 192.168.1.9
            - name: NFS_PATH
              value: /volume1/kubernetes
        volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.9
            path: /volume1/kubernetes
Apply the files
draconpern@k8s-master:~$ kubectl apply -f deployment.yaml
draconpern@k8s-master:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
draconpern@k8s-master:~$ wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/class.yaml
Add additional parameters in class.yaml (allowVolumeExpansion is a boolean, so don't quote it)
allowVolumeExpansion: true
reclaimPolicy: Delete
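For reference, after these edits class.yaml might look something like the following. This assumes the stock nfs-client example; the provisioner name and parameters in your copy may differ.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
allowVolumeExpansion: true
reclaimPolicy: Delete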
draconpern@k8s-master:~$ kubectl apply -f class.yaml
Unset rook-ceph as the default storage class and set nfs-client as the default
draconpern@k8s-master:~$ kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 
draconpern@k8s-master:~$ kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
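To check that dynamic provisioning actually works, you can create a throwaway PersistentVolumeClaim against the new default class (the file and claim names below are just examples):
draconpern@k8s-master:~$ nano test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
draconpern@k8s-master:~$ kubectl apply -f test-pvc.yaml
draconpern@k8s-master:~$ kubectl get pvc test-pvc
The claim should reach the Bound state and a matching directory should appear under your NFS export. Clean up with kubectl delete -f test-pvc.yaml when done.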
You should see an NFS connection to your server for every worker node you have. Continue to Step 13 on the original post.

Baremetal Kubernetes without MAAS

This is how I built a 4-node Kubernetes cluster on Ubuntu without using MAAS. Software used includes Kubernetes 1.16 and Ubuntu 18.04.3 LTS. The recommended minimum number of machines is 3, and they can be all virtual or a mix of VMs and bare metal. You'll get a cluster with one Kubernetes master node and three worker nodes. The cluster will support running most containers and features, including dynamic storage provisioning, distributed storage, load-balanced services and dedicated application IPs. We will start with 4 machines. The master node needs a minimum of 4 GB of RAM and one OS disk. The three worker nodes will need at least 8 GB of RAM, an OS disk and optionally one disk for storage. We'll start with the following minimally installed systems. They will need static IPs, with DNS entries being optional. The network subnet is 192.168.1.0/24 with a DHCP range of 192.168.1.50-192.168.1.150
  • Kubernetes Master Node – (Hostname: k8s-master, IP : 192.168.1.40, OS : Ubuntu 18.04 LTS)
  • Kubernetes Worker Node 1 – (Hostname: k8s-node1, IP: 192.168.1.41 , OS : Ubuntu 18.04 LTS)
  • Kubernetes Worker Node 2 – (Hostname: k8s-node2, IP: 192.168.1.42 , OS : Ubuntu 18.04 LTS)
  • Kubernetes Worker Node 3 – (Hostname: k8s-node3, IP: 192.168.1.43 , OS : Ubuntu 18.04 LTS)

Step:1) Set Hostname and update hosts file

You would usually set the hostname and IP addresses during install, but if not, log in to the master node and configure its hostname using the hostnamectl command
draconpern@localhost:~$ sudo hostnamectl set-hostname "k8s-master"
draconpern@localhost:~$ exec bash
draconpern@k8s-master:~$
Log in to the worker nodes and configure their hostnames respectively using the hostnamectl command,
draconpern@localhost:~$ sudo hostnamectl set-hostname k8s-node1
draconpern@localhost:~$ exec bash
draconpern@k8s-node1:~$

draconpern@localhost:~$ sudo hostnamectl set-hostname k8s-node2
draconpern@localhost:~$ exec bash
draconpern@k8s-node2:~$

draconpern@localhost:~$ sudo hostnamectl set-hostname k8s-node3
draconpern@localhost:~$ exec bash
draconpern@k8s-node3:~$
Add the following lines to the /etc/hosts file on all four systems,
draconpern@k8s-master:~$ sudo nano /etc/hosts
192.168.1.40     k8s-master
192.168.1.41     k8s-node1
192.168.1.42     k8s-node2
192.168.1.43     k8s-node3
Edit the /etc/netplan/50-cloud-init.yaml file and change to a static IP, for example on the master. (Make sure you use spaces; YAML requires spaces for indentation.)
draconpern@k8s-master:~$ sudo nano /etc/netplan/50-cloud-init.yaml
network:
     ethernets:
         eth0:
             addresses:
             - 192.168.1.40/24
             gateway4: 192.168.1.1
             nameservers:
                 addresses:
                 - 192.168.1.1
                 search:
                 - draconpern.local
     version: 2
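Then apply the new network configuration. If the address changes, your SSH session may drop and you'll need to reconnect on the new IP.
draconpern@k8s-master:~$ sudo netplan apply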

Step:2) Install and Start Docker Service on Master and Worker Nodes

Run the below apt-get command to install Docker on the master node,
draconpern@k8s-master:~$ sudo apt-get install docker.io -y
Run the below apt-get command to install docker on worker nodes,
draconpern@k8s-node1:~$ sudo apt-get install docker.io -y
draconpern@k8s-node2:~$ sudo apt-get install docker.io -y
draconpern@k8s-node3:~$ sudo apt-get install docker.io -y
Override the default docker unit file. (For the reason why this is required, see https://kubernetes.io/docs/setup/production-environment/container-runtimes/)
draconpern@k8s-master:~$ sudo systemctl edit docker
draconpern@k8s-node1:~$ sudo systemctl edit docker
draconpern@k8s-node2:~$ sudo systemctl edit docker
draconpern@k8s-node3:~$ sudo systemctl edit docker
You’ll go into an editor with no content. Enter the following into it.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
Once the Docker packages are installed on all four systems, restart and enable the docker service using the below systemctl commands. These commands need to be executed on the master and worker nodes. (The restart is needed to apply the cgroupdriver change.)
:~$ sudo systemctl restart docker
:~$ sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
:~$
Run the docker command to verify which Docker version has been installed.
:~$ docker --version
Docker version 18.09.7, build 2d0083d
:~$
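To confirm the cgroup driver override took effect, you can also check what docker reports; it should say systemd rather than cgroupfs:
:~$ docker info | grep -i "cgroup driver"
 Cgroup Driver: systemd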

Step:3) Configure Kubernetes Package Repository on Master & Worker Nodes

Note: All the commands in this step need to be run on the master and worker nodes. Add the Kubernetes package repository key using the following command,
draconpern@k8s-master:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
draconpern@k8s-node1:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
draconpern@k8s-node2:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
draconpern@k8s-node3:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
 
Now configure the Kubernetes repository. At this point in time an Ubuntu 18.04 (Bionic Beaver) Kubernetes package repository is not yet available, so we will be using the Xenial Kubernetes repository. Do this on the master and all nodes.
:~$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Step:4) Disable Swap and Install Kubeadm on all the Nodes

Note: All the commands in this step must be run on the master and worker nodes. You must disable swap on all nodes for k8s to install. Run the following command to disable swap temporarily,
:~$ sudo swapoff -a
You also need to disable swap permanently by commenting out the swap file or swap partition entry in the /etc/fstab file. Use nano to edit the file and put a '#' at the beginning of the swap.img line.
:~$ sudo nano /etc/fstab
#/swap.img      none    swap
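You can quickly confirm that swap is off with free; the Swap line should show all zeros:
:~$ free -h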
Now install the kubeadm package on all the nodes, including the master.
:~$ sudo apt-get install kubeadm -y
Once kubeadm packages are installed successfully, verify the kubeadm version.
:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
:~$

Step:5) Initialize and Start Kubernetes Cluster on Master Node using Kubeadm

Use the below kubeadm command on Master Node to initialize Kubernetes
draconpern@k8s-master:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
In the above command, you can use the same pod network, or choose a different one if 10.244.0.0/16 overlaps with your existing network. Keep the /16 subnet size. If the command is successful, you'll get instructions on copying the configuration and also a command line for joining computers to the cluster. Copy the join command into a text file for later use. Copy the configuration to your profile by running,
draconpern@k8s-master:~$ mkdir -p $HOME/.kube
draconpern@k8s-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
draconpern@k8s-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
draconpern@k8s-master:~$
Verify the status of the master node using the following command,
draconpern@k8s-master:~$ kubectl get nodes
NAME               STATUS        ROLES    AGE   VERSION
k8s-master         NotReady      master   2m    v1.16.0
As we can see in the above output, our master node is not ready because we haven't installed a pod network yet. We will deploy Calico as our pod network; it will provide the overlay network between cluster nodes and enable pod-to-pod communication.

Step:6) Deploy Calico as Pod Network from Master node and verify Pod Namespaces

Download the calico yaml file
draconpern@k8s-master:~$ wget https://docs.projectcalico.org/manifests/calico.yaml
Edit calico.yaml and change the entry for CALICO_IPV4POOL_CIDR from 192.168.0.0/16 to 10.244.0.0/16. If you don't find this entry, there is no need to edit the file. This should be the same CIDR as the one used in Step 5. If you don't do this, the cluster will look like it works, but communication between pods on different hosts will fail.
draconpern@k8s-master:~$ nano calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16" 
Execute the following kubectl command to deploy the pod network from the master node
draconpern@k8s-master:~$ kubectl apply -f calico.yaml
The output of the above command should be something like below
draconpern@k8s-master:~$ kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
draconpern@k8s-master:~$
After the command returns, it will take a few seconds for the network to come up. Now verify the master node status and pod namespaces using the kubectl command. It may take a few seconds until the status changes from NotReady to Ready.
draconpern@k8s-master:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   11m   v1.16.0
draconpern@k8s-master:~$

draconpern@k8s-master:~$ kubectl get pods -A
NAMESPACE        NAME                                       READY   STATUS              RESTARTS   AGE
kube-system      calico-kube-controllers-6895d4984b-xs45k   1/1     Running             0          15m
kube-system      calico-node-756lr                          1/1     Running             0          15m
kube-system      coredns-5644d7b6d9-6hgww                   1/1     Running             0          15m
kube-system      coredns-5644d7b6d9-7l8vc                   1/1     Running             0          15m
kube-system      etcd-k8s-master                            1/1     Running             0          15m
kube-system      kube-apiserver-k8s-master                  1/1     Running             0          15m
kube-system      kube-controller-manager-k8s-master         1/1     Running             0          15m
kube-system      kube-proxy-2hbgd                           1/1     Running             0          15m
kube-system      kube-scheduler-k8s-master                  1/1     Running             0          15m
draconpern@k8s-master:~$
As we can see in the above output, our master node status has changed to "Ready" and all the pods in all namespaces are in the Running state, so this confirms that our master node is healthy and ready to form a cluster.

Step:7) Add Worker Nodes to the Cluster

Note: In Step 5, kubeadm printed a command which we will need to use on the worker nodes to join the cluster. (Your token and hash will be different, and the join command can always be regenerated.) Log in to the first worker node (k8s-node1) and run the following command to join the cluster,
draconpern@k8s-node1:~$ sudo kubeadm join 192.168.1.40:6443 --token 1wx3sk.hjkd54juaxlov7d2 --discovery-token-ca-cert-hash sha256:5bc67b66720b048dea438578c9591cc5095f572c5dbf240aca0c3e0620a917f3
Similarly run the same kubeadm join command on the rest of the worker nodes,
draconpern@k8s-node2:~$ sudo kubeadm join 192.168.1.40:6443 --token 1wx3sk.hjkd54juaxlov7d2 --discovery-token-ca-cert-hash sha256:5bc67b66720b048dea438578c9591cc5095f572c5dbf240aca0c3e0620a917f3
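If you lost the original join command or the token has expired (tokens are valid for 24 hours by default), you can generate a fresh one on the master,
draconpern@k8s-master:~$ sudo kubeadm token create --print-join-command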
Now go to the master node to check the master and worker node status
draconpern@k8s-master:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   100m   v1.16.0
k8s-node1    Ready    <none>   10m    v1.16.0
k8s-node2    Ready    <none>   12m    v1.16.0
k8s-node3    Ready    <none>   13m    v1.16.0
draconpern@k8s-master:~$

Step: 8) Install a Baremetal Load Balancer, MetalLB

We can install metallb directly from the yaml file.
draconpern@k8s-master:~$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
Create a configuration file, e.g. metallb-config.yaml, on k8s-master. Here I use a range of free IPs not used by any machine or by DHCP on the same network as the nodes. For example, for this network we pick 192.168.1.200-192.168.1.250 to avoid the DHCP range of 192.168.1.50-192.168.1.150.
draconpern@k8s-master:~$ nano metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.200-192.168.1.250
Apply the file
draconpern@k8s-master:~$ kubectl apply -f metallb-config.yaml
Verify the load balancer works. First create a test deployment of nginx.
draconpern@k8s-master:~$ kubectl create deployment nginx --image=nginx
Create a file e.g. test.yaml with the following
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Then apply with,
draconpern@k8s-master:~$ kubectl apply -f test.yaml
Get a list of services,
draconpern@k8s-master:~$ kubectl get svc
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.96.180.255   192.168.1.240   80:30752/TCP   21s
You can browse to 192.168.1.240 and you'll get the nginx welcome page. Remove the test service and deployment,
draconpern@k8s-master:~$ kubectl delete -f test.yaml
draconpern@k8s-master:~$ kubectl delete deployment nginx

Step: 9) Setup Rook-Ceph for Storage

Note: If you have a Synology, you should use NFS instead; steps here. A limitation of rook-ceph block storage is that it can't be shared between pods. At this point, the cluster works but can only run stateless pods; the moment a pod is terminated, all of its data is gone. To run stateful pods, for example StatefulSets, you need persistent storage. We'll use Rook Ceph to provide that. Install rook-ceph:
draconpern@k8s-master:~$ kubectl apply -f https://github.com/rook/rook/raw/release-1.1/cluster/examples/kubernetes/ceph/common.yaml
draconpern@k8s-master:~$ kubectl apply -f https://github.com/rook/rook/raw/release-1.1/cluster/examples/kubernetes/ceph/operator.yaml
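Before continuing, you can watch the rook-ceph namespace until the operator (and related) pods reach the Running state; the exact pod names will vary:
draconpern@k8s-master:~$ kubectl -n rook-ceph get pods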
Install the rbd command-line utility, needed to mount rbd volumes, on each worker node.
draconpern@k8s-node1:~$ sudo apt-get install ceph-common 
draconpern@k8s-node2:~$ sudo apt-get install ceph-common  
draconpern@k8s-node3:~$ sudo apt-get install ceph-common  
Next, download the cluster.yaml on the master node.
draconpern@k8s-master:~$ wget https://github.com/rook/rook/raw/release-1.1/cluster/examples/kubernetes/ceph/cluster.yaml
If you have the optional data drive on the worker nodes, go to the next step. Otherwise, go to Step 11 to set up a directory for data storage.

Step: 10) Use Data Drive for Storage

Run the following command on the worker nodes to wipe the start of the data drive. Warning: be sure to use the correct drive! Note the block size of 1024, which removes any previous installation of ceph data. If you don't want to wipe the drive, use Step 11 instead.
draconpern@~$ dd if=/dev/zero of=/dev/sdb bs=1024 count=1
Apply the default rook-ceph configuration
draconpern@k8s-master:~$ kubectl apply -f cluster.yaml
You are done! Skip the next step and go to Step 12 to create the storage class.

Step: 11) Use Directory for Storage

Edit cluster.yaml and find the directories lines. Uncomment the two lines by removing the # from the beginning. This will use the /var/lib/rook directory on each worker node for storage. You can change the path to whatever you want. The directory should be created by root.
draconpern@k8s-master:~$ nano cluster.yaml 
directories:
- path: /var/lib/rook
Note: rook-ceph only supports ext4 and xfs. It will not work if the directory is on a btrfs volume. Apply the cluster file
draconpern@k8s-master:~$ kubectl apply -f cluster.yaml

Step: 12) Create the Storage Class and make it the Default for the Cluster.

draconpern@k8s-master:~$ kubectl apply -f https://github.com/rook/rook/raw/release-1.1/cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml
draconpern@k8s-master:~$ kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
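You can confirm which storage class is now the default with,
draconpern@k8s-master:~$ kubectl get storageclass
The rook-ceph-block class should be listed with (default) next to its name.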

Step: 13) Verify and try it out

draconpern@k8s-master:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   9d    v1.16.0
k8s-node1    Ready    <none>   9d    v1.16.0
k8s-node2    Ready    <none>   9d    v1.16.0
k8s-node3    Ready    <none>   9d    v1.16.0
Follow this example to get a fully working application. https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

Step: 14) Prevent accidental upgrades

kubelet shouldn't be upgraded automatically. To make sure that doesn't happen, run the following on each node.
draconpern@~$ sudo apt-mark hold kubelet
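If you also want to pin the rest of the Kubernetes tooling to the current version, the same command works for kubeadm and kubectl (optional):
draconpern@~$ sudo apt-mark hold kubeadm kubectl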

Extra Bonus

Here are some common commands and tweaks you might want to try out. Add
alias k=kubectl
complete -F __start_kubectl k
to the end of .bashrc to use 'k' instead of typing kubectl.
Shell completion on the master:
:~$ echo "source <(kubectl completion bash)" | sudo tee /etc/bash_completion.d/kubectl
Allow running pods on the master:
:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
Stop running pods on the master:
:~$ kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule
Need to pull images from a private registry? On master and every node, edit /etc/docker/daemon.json and add insecure-registries.
:~$ sudo nano /etc/docker/daemon.json
{
   "insecure-registries":[
    "jenkins:5000"
  ]
}
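After saving daemon.json, restart docker on that machine so it picks up the new setting:
:~$ sudo systemctl restart docker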
If k8s is having trouble terminating some pods, disable AppArmor so that Kubernetes can delete pods from docker faster. This has to be done on the master and worker nodes.
:~$ sudo systemctl disable apparmor.service --now

Using letsencrypt and certbot on Cygwin

To use certbot on Cygwin, install the python3-pip package using the setup program from http://www.cygwin.com. Then install the certbot Python package by running

pip3 install certbot
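
To request a certificate in the first place, something like the following should work, assuming standalone mode and that port 80 on the Windows machine is reachable from the internet (example.com is a placeholder for your own domain):

certbot certonly --standalone -d example.com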

You can set up automatic renewal by creating a Windows Scheduled Task that runs every day with the following action.

Program: C:\cygwin64\bin\bash.exe
Add arguments: -c "certbot renew -n"

Your certificates are stored in C:\cygwin64\etc\letsencrypt\archive\

AT&T U-verse Pace Plc 5268AC bridge mode (with caveat)

Steps to enable bridge mode on the Pace Plc 5268AC (tested with software version 10.6.0.530094-att). Bridge mode is needed to use your own router (but it still needs to be plugged into the 5268AC). Here is the caveat: IPsec (site-to-site VPN) will not work. AT&T blocks it, so you have to upgrade to business-class service to get it unblocked. These instructions are aimed at regular U-verse service without a static IP or IP block. If you have a static block, you should use the Cascaded Router feature.