Just sharing this for my future reference as well.
Install Ubuntu
Raspberry Pi OS (previously Raspbian) has not released its 64-bit build yet. I also do not want to be bothered by the iptables workaround for Traefik on Raspberry Pi OS.
- Download the 64-bit Ubuntu Server image for Raspberry Pi.
- Restore the image to your SD card(s).
- Unlike in Raspberry Pi OS, you do not need to create an `ssh` file in `/boot` to enable SSH.
- Append the following to your `/boot/firmware/cmdline.txt` file:

```
cgroup_memory=1 cgroup_enable=memory
```

- If you forgot the previous step, just ssh into your Pi, modify said file, and reboot.
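If you want to do this in one shot over ssh, here is a minimal sketch; it assumes cmdline.txt is still a single line and that the flags are not already present:

```
# append the cgroup flags to the end of the existing kernel command line, then reboot
sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt
sudo reboot
```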
Install K3s
- On your master node: If your master node is behind a router (i.e. you are port forwarding), run:

```
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san [your router IP]" sh -s -
```

Otherwise:

```
curl -sfL https://get.k3s.io | sh -s -
```
- Check master K3s status with:

```
systemctl status k3s.service
```

No error? Good.
- Copy the K3s kube config file to your client (probably your desktop PC). The file is located at `/etc/rancher/k3s/k3s.yaml` on the master node; copy it to your client's `~/.kube/config`.
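For example, assuming the default `ubuntu` user and a master node at 192.168.1.10 (both placeholders for your own setup), something like this should work; `k3s.yaml` is only readable by root by default, hence the `sudo cat`:

```
# make sure the target directory exists on the client
mkdir -p ~/.kube
# read the kube config on the master and save it locally
ssh ubuntu@192.168.1.10 'sudo cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/config
```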
- Modify the `~/.kube/config` file: change the `clusters[0].cluster.server` value from `https://127.0.0.1:6443` to whatever your master node IP is (if you are port forwarding, your router's IP), for example `https://192.168.1.1:6443`.
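If you prefer a one-liner, here is a sketch of the same edit with GNU sed (192.168.1.1 is just an example; use your master's or router's IP):

```
# point the kubeconfig at the master node instead of localhost
sed -i 's#https://127.0.0.1:6443#https://192.168.1.1:6443#' ~/.kube/config
```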
- Check the connection from your client with:

```
kubectl get node
```

Expected result:

```
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   10m   v1.20.5+k3s1
```
- Get the K3s token on your master with:

```
sudo cat /var/lib/rancher/k3s/server/node-token
```
- On your worker nodes:

```
curl -sfL https://get.k3s.io | K3S_URL=https://[master IP]:6443 K3S_TOKEN="[K3s token]" sh -
```
- Check worker K3s status with:

```
systemctl status k3s-agent.service
```

No error? Good.
- Check the connection from your client with:

```
kubectl get node
```

Expected result:

```
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   20m   v1.20.5+k3s1
worker   Ready    <none>                 10m   v1.20.5+k3s1
```
- Reference: https://rancher.com/docs/k3s/latest/en/
Optional: Install Web UI (Dashboard)
- On your client, run:

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
```
- Create an admin user:

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
```
- Give the cluster-admin role to the admin user:

```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
EOF
```
- Get the bearer token:

```
kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token
```
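Note: on newer Kubernetes versions (1.24+), ServiceAccount token secrets are no longer created automatically, so the command above may find nothing. In that case, assuming your kubectl is recent enough to have the subcommand, you can request a token explicitly:

```
kubectl -n kubernetes-dashboard create token admin-user
```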
- Run a proxy from your client:

```
kubectl proxy
```
- In your browser, open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. Enter the bearer token you obtained.
- Reference: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
Optional: Install Longhorn
Installing Longhorn gives you dynamic provisioning for PersistentVolumeClaims.
- On your client, run:

```
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
```
- Wait a little while; you may check the status of the pods via the web UI or via:

```
kubectl -n longhorn-system get pods
```
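If you would rather block until everything is up instead of polling, something like this should work (the 10-minute timeout is an arbitrary choice):

```
# wait for all Longhorn pods to become Ready
kubectl -n longhorn-system wait --for=condition=Ready pods --all --timeout=600s
```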
- Test! Create a PVC. On your client, run:

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF
```
- Test! Create a Pod that uses the PVC. On your client, run:

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
    - name: volume-test
      image: nginx:stable-alpine
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: test-pvc
          mountPath: /data
      ports:
        - containerPort: 80
  volumes:
    - name: test-pvc
      persistentVolumeClaim:
        claimName: test-pvc
EOF
```
- Check the status. On your client, run:

```
kubectl get pods
```

If all goes well, you should see the pod running:

```
NAME          READY   STATUS    RESTARTS   AGE
volume-test   1/1     Running   0          29s
```
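You can also confirm that the claim was actually bound to a Longhorn volume:

```
# STATUS should show "Bound" and STORAGECLASS should show "longhorn"
kubectl get pvc test-pvc
```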
- Clean up:

```
kubectl delete pvc/test-pvc pod/volume-test
```
- Optionally, you can open up a Longhorn UI Ingress with:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: longhorn-system
  name: longhorn-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
    - host: longhorn.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
```

If you do not own a domain, just mock it with `/etc/hosts` and it should work just fine.
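For example, an entry like the one below on your client machine would resolve the mocked host (the IP is a placeholder for whichever node Traefik is reachable on):

```
# add a fake DNS entry for the Ingress host
echo "192.168.1.10 longhorn.example.com" | sudo tee -a /etc/hosts
```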
- If you are not serious about storage replicas, you may want to change the replica count to 1 (the default is 3 replicas):

```
kubectl -n longhorn-system edit cm/longhorn-storageclass
```

Change `numberOfReplicas` to 1.
- Reference: https://rancher.com/docs/k3s/latest/en/storage/
Optional: Use USB Storage Device for Longhorn
- Connect your USB storage device to one of your nodes, then ssh into that node.
- Create a mount point:

```
sudo mkdir /media/storage
```
- Get the PARTUUID with:

```
sudo blkid
```

You should see something like:

```
/dev/sda1: LABEL="mystorage" UUID="[device uuid]" TYPE="ext4" PARTUUID="[partition uuid]"
```
- Modify your `/etc/fstab` file by adding the line below (note: my external device's file system is ext4):

```
PARTUUID=[partition uuid] /media/storage ext4 defaults,noatime,nodiratime 0 2
```
- You can test if it mounts correctly with:

```
sudo mount -a
```
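If that prints no errors, you can double-check that the partition is mounted where you expect:

```
df -h /media/storage
```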
- K3s runs as root, so it will have full access to the device. You can optionally create a group for the directory and add yourself to it to get access to the files:

```
sudo groupadd [group name]
sudo usermod -aG [group name] [your username]
sudo chown -R :[group name] /media/storage
# set the gid bit so all files/directories created will have the same group as the parent directory
sudo chmod g+s /media/storage
```
- Add the storage to Longhorn: open the Longhorn UI, go to Nodes, select "Edit node and disks", then click "Add Disk". Fill in the details and click Save.
- And you are good to go!