Setting up a Fedora CoreOS Kubernetes Cluster

How to set up a cluster of container-optimized OS machines for use with Kubernetes.

Get Fedora CoreOS

Get the image file from the Fedora CoreOS download page. I use the OVA (the VirtualBox image).

Create a host-only network

In VirtualBox, create a host-only network named vboxnet0 with the range 192.168.99.0/24.
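On VirtualBox 7 this can be done from the CLI; a sketch, assuming the newer hostonlynet networks (which the launch script below also uses; older versions use the VBoxManage hostonlyif commands instead):

VBoxManage hostonlynet add --name vboxnet0 --netmask 255.255.255.0 \
--lower-ip 192.168.99.2 --upper-ip 192.168.99.254 --enable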

The gateway address (.1) belongs to our host, so we can run a mini HTTP server there with Python to serve the Ignition file.

Butane file

As the full config would exceed the maximum size VirtualBox allows for a guest property, we point to a remote file instead. This is the Butane file injected into the VirtualBox VM:

variant: fcos
version: 1.4.0
ignition:
  config:
    replace:
      source: http://192.168.99.1:8080/fcos.ign

And the remote one:

variant: fcos
version: 1.4.0
storage:
  files:
    # CRI-O DNF module
    - path: /etc/dnf/modules.d/cri-o.module
      mode: 0644
      overwrite: true
      contents:
        inline: |
          [cri-o]
          name=cri-o
          stream=1.17
          profiles=
          state=enabled
    # YUM repository for kubeadm, kubelet and kubectl
    - path: /etc/yum.repos.d/kubernetes.repo
      mode: 0644
      overwrite: true
      contents:
        inline: |
          [kubernetes]
          name=Kubernetes
          baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
          enabled=1
          gpgcheck=0
          repo_gpgcheck=0
          gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
            https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    # configuring automatic loading of br_netfilter on startup
    - path: /etc/modules-load.d/br_netfilter.conf
      mode: 0644
      overwrite: true
      contents:
        inline: br_netfilter
    # setting kernel parameters required by kubelet
    - path: /etc/sysctl.d/kubernetes.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          net.bridge.bridge-nf-call-iptables=1
          net.ipv4.ip_forward=1
passwd: # setting login credentials
  users:
    - name: core
      ssh_authorized_keys:
        - {YOUR_SSH_KEY}

Ignition

We generate the two Ignition files: the one injected into VirtualBox and the "remote" one served from the host:

docker run --interactive --rm \
quay.io/coreos/butane:release \
--pretty --strict < fcos.bu > fcos.ign

docker run --interactive --rm \
quay.io/coreos/butane:release \
--pretty --strict < fcos_vbox.bu > fcos_vbox.ign

You could use podman instead of docker.
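The invocation is identical with podman:

podman run --interactive --rm \
quay.io/coreos/butane:release \
--pretty --strict < fcos.bu > fcos.ign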

Server

From the folder where you store the Ignition files, run:

python3 -m http.server 8080
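You can check that the file is actually reachable at the address the stub config points to:

curl http://192.168.99.1:8080/fcos.ign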

Launch the machines

Here's a script I use to create the VMs:

#!/usr/bin/env sh

DIR="$HOME/Kubernetes"
IGN_PATH="$DIR/fcos_vbox.ign"
VM_NAME="minion$1"
# import the OVA as a new VM
VBoxManage import --vsys 0 --vmname "$VM_NAME" fedora-coreos-36.20220918.3.0-virtualbox.x86_64.ova
# inject the small Ignition config that points to the remote one
VBoxManage guestproperty set "$VM_NAME" /Ignition/Config "$(cat "$IGN_PATH")"
# forward host port 222N to the guest's SSH port
VBoxManage modifyvm "$VM_NAME" --natpf1 "guestssh,tcp,,222$1,,22"
# attach the second NIC to the host-only network
VBoxManage modifyvm "$VM_NAME" --nic2 hostonlynet --host-only-net2 vboxnet0
VBoxManage startvm "$VM_NAME" --type headless

Launch it with ./script.sh N, where N is a single digit (0-9); it becomes both the VM name suffix and the SSH port suffix.
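Thanks to the NAT port-forwarding rule set by the script, each VM is reachable over SSH from the host; for N=0:

ssh -p 2220 core@localhost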

Configure the machines

Change their hostnames (replace N with the machine's number):

sudo -i
echo "minionN" > /etc/hostname

Then on the main node, install the tools:

sudo rpm-ostree install kubelet kubeadm kubectl cri-o

We don't need kubectl on the other nodes:

sudo rpm-ostree install kubelet kubeadm cri-o

Reboot the nodes so the newly layered packages take effect. Then enable and start the crio and kubelet systemd services on each node:

sudo systemctl enable --now crio kubelet

Edit Hosts file

Add each node's host-only IP and hostname to /etc/hosts on every node.
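For example, with the control plane on .8 (as in the kubeadm config below) and one worker on .9 (these addresses are just an assumption; check the real ones with ip addr on each node):

192.168.99.8 minion0
192.168.99.9 minion1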

Init the cluster

On the main node (the control plane), create this file and save it as clusterconfig.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.0
controllerManager:
  extraArgs: # specify an R/W directory for FlexVolumes (the cluster won't work without this even though we use PVs)
    flex-volume-plugin-dir: "/etc/kubernetes/kubelet-plugins/volume/exec"
networking: # pod subnet definition
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
  - "10.0.2.15"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.99.8"
  bindPort: 6443

Remember to change advertiseAddress to your main node's host-only IP.

Launch kubeadm:

kubeadm init --config clusterconfig.yaml

Write down the join command; the output should look similar to this:

kubeadm join 192.168.99.8:6443 --token wmjfhl.mougk1kpm7h0bx2c \
        --discovery-token-ca-cert-hash sha256:1bdd4762665258142cc2c06f1084a6d81cf02199bc5f498bead13ed48cc07647

If the address shown is not the 192.168.99.0/24 scoped one, change it, then run the join command as root on each worker node.

Network initialization

Deploy kube-router as the pod network add-on (it uses the podSubnet we defined above):

kubectl apply -f \
https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
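After a minute or two the nodes should report Ready and the kube-router pods should be running:

kubectl get nodes
kubectl get pods --all-namespaces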

Deploy something

Now it's up to you! Don't forget to get the kubeconfig file.
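On the control plane, the usual post-init steps (kubeadm init prints them too) give your user a working kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Copy that file to your workstation if you prefer running kubectl from there.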
