
Setting up a single-node home Kubernetes cluster


The Beelink SEi12 mini PC used here enters the BIOS with the DEL key. Set the bootable USB as the first entry in the boot order.


Everything up to this point was configured over Wi-Fi, but a wired connection should be more stable later on, so I moved to Ethernet. At that point I ran the following commands to assign a static IP. Check the target network interface beforehand with ip a or similar.

sudo nmcli con mod enp3s0 ipv4.addresses 192.168.0.2/24
sudo nmcli con mod enp3s0 ipv4.gateway 192.168.0.1
sudo nmcli con mod enp3s0 ipv4.dns 192.168.0.1
sudo nmcli con mod enp3s0 ipv4.method manual
sudo nmcli con mod enp3s0 connection.autoconnect yes
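
These nmcli changes are written to the connection profile but only take effect once the connection is re-activated. A small follow-up, assuming the profile is named enp3s0 like the device above:

sudo nmcli con up enp3s0
ip a show enp3s0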

Set up Avahi so the host can be resolved on the LAN via mDNS (hostname.local)

sudo dnf install avahi avahi-tools
sudo vi /etc/avahi/avahi-daemon.conf
[server]
#host-name=foo
#domain-name=local
#browse-domains=0pointer.de, zeroconf.org
use-ipv4=yes
use-ipv6=yes
# change this to the target interface
allow-interfaces=enp3s0
#deny-interfaces=eth1
#check-response-ttl=no
#use-iff-running=no
#enable-dbus=yes
#disallow-other-stacks=no
#allow-point-to-point=no
#cache-entries-max=4096
#clients-max=4096
#objects-per-client-max=1024
#entries-per-entry-group-max=32
ratelimit-interval-usec=1000000
ratelimit-burst=1000
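
Editing the config alone does not start anything. Presumably the daemon also needs to be enabled, and since firewalld turns out to be active later in this post, mDNS may need to be allowed through it as well (a sketch):

sudo systemctl enable --now avahi-daemon
sudo firewall-cmd --permanent --add-service=mdns
sudo firewall-cmd --reload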

Disable swap

sudo swapoff -a
sudo vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Oct 13 23:25:11 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rl-root     /                       xfs     defaults        0 0
UUID=484e32f9-88b2-4558-a0cd-5ee4db10cc7e /boot                   xfs     defaults        0 0
UUID=6D50-DDB8          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
/dev/mapper/rl-home     /home                   xfs     defaults        0 0
# /dev/mapper/rl-swap     none                    swap    defaults        0 0 # comment out this line
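
swapoff -a only disables swap until the next reboot; commenting out the fstab entry is what keeps it off permanently. A quick check that no swap is active:

swapon --show
free -h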

Install containerd

sudo dnf install -y wget tar
wget https://github.com/containerd/containerd/releases/download/v1.7.22/containerd-1.7.22-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.22-linux-amd64.tar.gz
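
Note that the plain containerd tarball does not bundle runc. If runc is not already on the system (dnf info runc will tell), the containerd getting-started guide installs it separately; the version below is only an example:

wget https://github.com/opencontainers/runc/releases/download/v1.1.14/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc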

Make containerd start automatically

sudo mkdir -p /usr/local/lib/systemd/system
sudo wget -P /usr/local/lib/systemd/system/ https://raw.githubusercontent.com/containerd/containerd/refs/tags/v1.7.22/containerd.service
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
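
The tarball also ships no /etc/containerd/config.toml. kubeadm's default kubelet configuration uses the systemd cgroup driver, so a matching containerd config is commonly generated along these lines (a sketch, assuming the stock defaults are otherwise fine):

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd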

Install Kubernetes

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
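
A quick sanity check that the tools are in place; the kubelet will keep restarting until kubeadm init hands it a configuration, which is expected at this stage:

kubeadm version -o short
kubectl version --client
systemctl status kubelet --no-pager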

Bringing up the cluster

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

The following error came up

[init] Using Kubernetes version: v1.31.1
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING Hostname]: hostname "cybercluster" could not be reached
        [WARNING Hostname]: hostname "cybercluster": lookup cybercluster on 192.168.0.1:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Address the following part

        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

Set the value of /proc/sys/net/ipv4/ip_forward to 1

echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
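
Writing to /proc like this only lasts until the next reboot. To make it persistent, the usual approach (as in the Kubernetes container runtime prerequisites) is a sysctl drop-in:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system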

Bring up the cluster again

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

On success, output like the following appears

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Following the message, run the commands below

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Running a kubectl command then shows errors like the following

kubectl get all -A
E1018 11:49:55.639432    9026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.0.2:6443/api?timeout=32s\": dial tcp 192.168.0.2:6443: connect: connection refused"
E1018 11:49:55.641060    9026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.0.2:6443/api?timeout=32s\": dial tcp 192.168.0.2:6443: connect: connection refused"
E1018 11:49:55.642344    9026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.0.2:6443/api?timeout=32s\": dial tcp 192.168.0.2:6443: connect: connection refused"
E1018 11:49:55.643649    9026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.0.2:6443/api?timeout=32s\": dial tcp 192.168.0.2:6443: connect: connection refused"
E1018 11:49:55.644911    9026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.0.2:6443/api?timeout=32s\": dial tcp 192.168.0.2:6443: connect: connection refused"
The connection to the server 192.168.0.2:6443 was refused - did you specify the right host or port?

Allow the API server port through the firewall

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports
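
The preflight warning earlier also mentioned port 10250. For a single control-plane node, the remaining control-plane ports listed in the Kubernetes docs can be opened the same way if needed (a sketch):

sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --reload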

Try the kubectl command again

$ kubectl get all -A
NAMESPACE     NAME                                       READY   STATUS             RESTARTS         AGE
kube-system   pod/coredns-7c65d6cfc9-9q9gr               0/1     Pending            0                24m
kube-system   pod/coredns-7c65d6cfc9-h8brq               0/1     Pending            0                24m
kube-system   pod/etcd-cybercluster                      1/1     Running            12 (5m11s ago)   25m
kube-system   pod/kube-apiserver-cybercluster            1/1     Running            13 (4m44s ago)   25m
kube-system   pod/kube-controller-manager-cybercluster   1/1     Running            14 (111s ago)    25m
kube-system   pod/kube-proxy-h44hq                       0/1     CrashLoopBackOff   13 (78s ago)     24m
kube-system   pod/kube-scheduler-cybercluster            1/1     Running            10 (77s ago)     24m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  25m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   24m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         0       1            0           kubernetes.io/os=linux   24m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           24m

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-7c65d6cfc9   2         2         0       24m

Allow pods to run on the control plane (remove the control-plane taint, since this is a single-node cluster)

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
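
If the taint was removed, the node (cybercluster in the logs above) should now report Taints: <none>:

kubectl describe node cybercluster | grep Taints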

Install Calico

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
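
The tigera-operator manifest only installs the operator itself. The Calico quickstart then applies custom-resources.yaml, whose default pod CIDR (192.168.0.0/16) happens to match the --pod-network-cidr passed to kubeadm init above; something along these lines, if not done in a later step:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
watch kubectl get pods -n calico-system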